Thursday, September 19, 2024

Web Services Architecture

We can easily understand why Web Services were created if we look at the computer and software world. There are many systems and platforms out there on the Internet, and there are even more applications living on these systems and platforms. Many technologies exist to connect clients to servers, including DCOM, CORBA, and others; Web Services grew out of the need for a new and much simpler kind of connectivity, based on open standards such as HTTP, XML, and SOAP.

Simply put, a Web Service is a component whose methods you can invoke over an Internet or intranet connection; in other words, a component that exposes its interface via the Web. Web Services build on the broad acceptance of XML in the open arena: they use XML as the way to serialize the data they receive from, and return to, the client. A client that can parse XML can use the data returned, even if the client and the Web Service host are running different operating systems, or the applications are developed in different programming languages. We will discuss Web Services more completely later in this article. Let us look at the full story.
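As a minimal sketch of this idea (the stock-quote payload and all element names are invented for illustration), any client with an XML parser can consume a service's response, whatever platform produced it:

```python
import xml.etree.ElementTree as ET

# A hypothetical XML payload returned by a Web Service. Any client
# that can parse XML can consume it, regardless of the platform or
# language the service was written in.
response = """<stockQuote>
    <symbol>MSFT</symbol>
    <price currency="USD">27.50</price>
</stockQuote>"""

root = ET.fromstring(response)
symbol = root.findtext("symbol")
price = float(root.find("price").text)
currency = root.find("price").get("currency")

print(symbol, price, currency)  # MSFT 27.5 USD
```

The same document could just as easily be parsed by a Java or C++ client; nothing in it depends on the producer's platform.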

Web Services will not necessarily replace component development; components make a lot of sense in an intranet solution where the environment is controlled, and it does not make sense to expose purely internal objects through a less efficient Web Service interface. Web Services make interoperating easy and effective, but they are not as fast as a binary proprietary protocol such as DCOM. The problem with components is distributing them across the Internet. In this article, I assume that you have some knowledge of distributed component technology and basic knowledge of middle-tier programming.

A Short History of the Enterprise Programming Model

The enterprise environment in which companies now find themselves has undergone major change over the last few decades. As U.S. and global markets have opened up, competition has intensified, and in order to survive, companies have had to make better use of the tools at their disposal. In the early days of software in the workplace, applications were confined to single computers, greatly limiting the problem domain in which they could operate. Many early desktop solutions relied on local copies of the company's databases being stored on each computer, where they would be accessed and manipulated separately.

As the technology improved, and applications became more sophisticated, solutions to this arrangement started to appear based on an online central database accessed over a network. Often, the workload of performing the tasks requested by users is shared between the desktop machines and the servers. This model is known as distributed computing.

Many recent enterprise solutions comprise several distributed applications, where the workload is shared between client and server according to the nature of the task and the power of the client machine. These applications, running on fast local networks, are ready to be extended to provide access through the Internet. The term client-server is a little simplistic for the networked systems found in today's world of business, but the principal notion of a separation between client and server still applies. Many such applications can still be thought of as client-server, although their actual model is more sophisticated, such as an n-tier arrangement.

Enterprise Application Integration Architecture

EAI builds on proven middleware techniques such as message brokering and data transformation, and introduces architectural components called adapters for communication with applications and other data sources. EAI also incorporates business process modeling and workflow, metadata management, security and system administration, and monitoring. Working in concert, this set of services provides a robust environment for integrating disparate applications within and across enterprises.

Enterprise Distributed Computing

Instead of running on a single computer, large-scale systems execute on a number of different machines. One simple reason is to distribute processing. Other reasons for distributing functionality among different physical machines include the ability to separate user-interface functionality from business logic, increase system robustness, and add security features. By separating user-interface functionality from business logic, you can more easily partition user-interface development from business-logic development. This makes development and management of the code much easier. Once you have developed your business logic, you can reuse it later. For example, two different user-interface applications can make use of the same business logic without changes to the business-logic code.

Robustness is another important reason for separating the user interface from the back-end business logic. In a client/server application, the server may service a hundred clients. If one of these clients causes a general protection fault, it should not affect the execution of the server or the other ninety-nine clients. In addition, when you split up user interface, business, and data logic, you may add security features to your system. For instance, your server can provide security functionality that authenticates the client each time it connects to the server. You can even add security logic to authorize every subsequent request that the authenticated client sends. These reasons make distributed computing very elegant.

There are many benefits to this model, as we have seen from the substantial increase in internet companies and internet/intranet applications in the last few years.

Component Technology

All this means that while the distributed computing revolution was taking place, the importance of component technology was also rising. The idea behind this technology was interface-based programming. A component would publish a particular interface that could be used to interact with it. This interface was a contract that was guaranteed to remain in place. Other developers could develop against these interfaces, confident that future changes to the component would not break their code.

The component interfaces conformed to a binary standard, giving developers the choice of different programming languages for the component and the client. COM/DCOM and CORBA have done very well in this arena.


Distributed object computing extends an object-oriented programming system by allowing objects to be distributed across a heterogeneous network, so that these distributed object components interoperate as a unified whole. The objects may be distributed on different computers throughout a network, living within their own address space outside of an application, and yet appear as though they were local to an application.

Three of the most popular distributed object paradigms are Microsoft's Distributed Component Object Model (DCOM), the OMG's Common Object Request Broker Architecture (CORBA), and JavaSoft's Java Remote Method Invocation (Java/RMI). We will now look more closely at COM/DCOM, COM+, and CORBA for a clear understanding of component and distributed programming techniques.

COM/DCOM, COM+

Component-based development has become a mainstream software discipline, with its own conferences, magazines, and consultants. However, the original component technology, and the one that is by far the most widely used, is Microsoft's Component Object Model (COM). Introduced in 1993, COM is now a mature foundation for component-based development, and it is the rare application for Windows and Windows NT that does not use COM in some way. With its integrated services and its excellent tool support from Microsoft and others, COM makes it easy to develop powerful component-based applications.

From its original application on a single machine, COM has expanded to allow access to components on other systems. Distributed COM (DCOM), introduced in 1996, makes it possible to create networked applications built from components. Available on various versions of UNIX, IBM mainframes, and other systems, DCOM is used today in applications ranging from innovative medical technology to traditional accounting and human resources systems. Once a leading edge technology, distributed components have gone mainstream, and the primary technologies enabling this are COM and DCOM.

A COM server can create object instances of multiple object classes. A COM object can support multiple interfaces, each representing a different view or behavior of the object. An interface consists of a set of functionally related methods. A COM client interacts with a COM object by acquiring a pointer to one of the object’s interfaces and invoking methods through that pointer, as if the object resides in the client’s address space. COM specifies that any interface must follow a standard memory layout, which is the same as the C++ virtual function table. Since the specification is at the binary level, it allows integration of binary components possibly written in different programming languages such as C++, Java and Visual Basic.

The evolution of Microsoft component services continues with COM+. By enhancing and extending existing services, COM+ further increases the value these services provide. COM+ includes:

  • A publish and subscribe service – Provides a general event mechanism that allows multiple clients to “subscribe” to various “published” events. When the publisher fires an event, the COM+ Events system iterates through the subscription database and notifies all subscribers.
  • Queued components – Allows clients to invoke methods on COM components using an asynchronous model. Such a model is particularly useful on unreliable networks and in disconnected usage scenarios.
  • Dynamic Load balancing – Automatically spreads client requests across multiple equivalent COM components.
  • Full integration of MTS into COM – Includes broader support for attribute-based programming, improvements in existing services such as Transactions, Security and Administration, as well as improved interoperability with other transaction environments through support for the Transaction Internet Protocol (TIP).
COM+ builds on what already exists; it is not a revolutionary departure. Microsoft component services provide an infrastructure for building enterprise applications, and enterprises seldom welcome revolutions in their infrastructure. However, software technology cannot stand still. The goal, then, must be to provide useful innovations that make it easier to create great applications without disrupting what is already in place. By extending and further unifying the existing component services, COM+ does exactly this.
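The publish-and-subscribe idea is easy to sketch in a language-neutral way. The following Python sketch (all names invented; this is not the COM+ Events API itself) mimics how an event system iterates its subscription database and notifies every subscriber when the publisher fires an event:

```python
class EventSystem:
    """Minimal publish/subscribe sketch, illustrating the concept only."""

    def __init__(self):
        self._subscriptions = {}   # event name -> list of subscriber callbacks

    def subscribe(self, event, callback):
        self._subscriptions.setdefault(event, []).append(callback)

    def publish(self, event, *args):
        # Iterate the subscription "database" and notify all subscribers.
        for callback in self._subscriptions.get(event, []):
            callback(*args)

events = EventSystem()
received = []
events.subscribe("stock.changed", lambda sym, price: received.append((sym, price)))
events.publish("stock.changed", "MSFT", 27.5)
print(received)  # [('MSFT', 27.5)]
```

The publisher needs no knowledge of who is listening; subscribers come and go by editing the subscription store, which is exactly the decoupling the COM+ Events service provides.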

Taken as a whole, Microsoft component services provide a powerful, flexible, and easy-to-use platform for building distributed applications. Nothing else available offers the same level of integration, broad tool support, and solid services.

CORBA

You may read this in several CORBA articles: CORBA is a distributed object framework proposed by a consortium of hundreds of companies called the Object Management Group (OMG). The core of the CORBA architecture is the Object Request Broker (ORB), which acts as the object bus over which objects transparently interact with other objects located locally or remotely. A CORBA object is represented to the outside world by an interface with a set of methods. A particular instance of an object is identified by an object reference. The client of a CORBA object acquires its object reference and uses it as a handle to make method calls, as if the object were located in the client's address space. The ORB is responsible for all the mechanisms required to find the object's implementation, prepare it to receive the request, communicate the request to it, and carry the reply, if any, back to the client. The object implementation interacts with the ORB either through an Object Adapter (OA) or through the ORB interface.

  • CORBA objects can be located anywhere on a network.
  • CORBA objects can interoperate with objects written on other platforms.
  • CORBA objects can be written in any programming language for which there is a mapping from IDL to that language.

Developers use CORBA to distribute applications across client-server, peer-to-peer, 3-tier, n-tier, and Internet/intranet networks. Instead of hundreds of thousands of lines of code running on mainframe computers with dumb terminals, smaller, more robust applications that communicate between file servers and workstations are now necessary. To keep this distribution of applications simple, a plug-and-play architecture is needed for distributing client-server (CS) applications, although the computing trend seems to be towards peer-to-peer computing. The developer can then write applications that work independently across various platforms and diverse networks. The idea behind CORBA is a software intermediary that handles and dispatches access requests on data sets. This intermediary is the Object Request Broker (ORB). The ORB interacts with and makes requests to differing objects. It sits on the host between the data and the application layer, that is, one level below the application layer (level 7 in the OSI model). An ORB negotiates between request messages from objects or object servers and the affiliated data sets.

Limitation of DCOM and CORBA

We could talk about other distributed object technologies as well, such as Sun's RMI (Remote Method Invocation) protocol. However, this article's objective is not to cover every detail of existing technologies, so we choose the two strongest alternatives to discuss what the problems with the existing tooling were. With that in mind, let us talk about why we need a new technology. Unfortunately, existing technologies have some serious limitations that have frustrated, or at least complicated, several existing projects.

The biggest problem with DCOM and CORBA is that both are platform specific, and neither is easily integrated with the other. You may create a kind of bridge process that translates messages from one to the other, but such a system runs into difficulty because of differences in DCOM and CORBA functionality, data types, and so on. Another key barrier is communication over the Internet. The distributed communication technologies described earlier have a symmetrical requirement: both ends of the communication link typically need to have implemented the same distributed object model. On the Internet, nobody can promise that both ends of the link will have implemented the same distributed object model; guaranteeing this is risky and usually just impossible.

Another limitation concerns firewalls and proxy servers. DCOM and CORBA are not firewall- and proxy-friendly. Both architectures typically require servers to listen on arbitrary port numbers, which is not without problems, and clients using these protocols typically require a direct connection to the server. In general, for security reasons, firewalls do not permit many open ports, except for some frequently used ones, such as those for HTTP and SMTP.

Furthermore, although CORBA and DCOM are respectable protocols, the business world has not yet moved completely to either one in particular. Each side pointing out the other's shortcomings underlines the lack of worldwide commercial acceptance. In other words, some models, for example DCOM, CORBA, and RMI for Java, work very well in an intranet environment. These technologies allow components to be invoked over network connections and, as a result, make distributed application development possible. In a pure environment each of these works well, but none is very successful at interoperating with the other protocols. For example, Java components cannot be called using DCOM, and COM objects cannot be invoked using RMI. Attempting to use these technologies over the Internet presents even more difficulty. Firewalls often block access to the required TCP/IP ports, and because they are proprietary formats, both the client and server must be running compatible software.

An Architectural Overview to Web Services

One of the most important advantages of the XML Web Services architecture is that it allows programs written in different programming languages on different platforms to communicate with each other in a standards-based way. There are two ways to work with Web Services: exposing internal system functionality to the outside world, and acting as a client, or consumer, of external Web Services. In this model, Web Services are used to access functionality from any tier in an application. This makes it possible for any distributed system exposed on the Internet to be incorporated into a custom application.

In general, the architecture of a Web Service is divided into five logical layers: the data layer, the data access layer, the business layer, the business facade, and the listener. The listener is nearest to the client, and the layer furthest from the client is the data layer. The business layer is further divided into two sublayers, the business logic and the business facade. Any physical data that the Web Service requires is stored in the data layer. Above the data layer is the data access layer, which presents a logical view of the physical data to the business logic. The data access layer isolates the business logic from changes to the underlying data stores and ensures the integrity of the data. The business facade provides a simple interface, which maps directly to operations exposed by the Web Service.
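These responsibilities can be sketched as plain classes. This is only an illustrative outline, not a prescribed implementation; the stock-quote operation and all names are invented:

```python
class DataAccessLayer:
    """Presents a logical view of the physical data store."""

    def __init__(self, store):
        self._store = store          # stands in for the data layer

    def get_price(self, symbol):
        return self._store[symbol]

class BusinessFacade:
    """Simple interface mapping directly to the service's operations."""

    def __init__(self, dal):
        self._dal = dal

    def quote(self, symbol):
        # In a larger service, business logic would sit between the
        # facade and the data access layer.
        return {"symbol": symbol, "price": self._dal.get_price(symbol)}

class Listener:
    """Receives requests, dispatches them to the facade, returns responses."""

    def __init__(self, facade):
        self._facade = facade

    def handle(self, request):
        if request["operation"] == "quote":
            return self._facade.quote(request["symbol"])
        raise ValueError("unknown operation")

listener = Listener(BusinessFacade(DataAccessLayer({"MSFT": 27.5})))
print(listener.handle({"operation": "quote", "symbol": "MSFT"}))
# {'symbol': 'MSFT', 'price': 27.5}
```

Note that the client only ever touches the listener; each inner layer can change without the outer ones noticing, which is the point of the layering.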

The business facade is used to provide a stable interface to the underlying business objects and to isolate the client from changes to the underlying business logic. When present, it sits either between the client and the business logic, or between the Web Service projects and the business logic layers.

The business logic layer provides services for the business facade's use. In a simple Web Service, all the business logic might be implemented by the business facade itself, which would interact directly with the data access layer. Web Service client applications interact with the Web Service listener. The listener is responsible for receiving incoming messages, which contain requests for services, parsing the messages, and dispatching the requests to the appropriate methods in the business facade.

This architecture is very similar to the n-tier application architecture defined by Windows DNA (Windows Distributed interNet Application). The Web Service listener is equivalent to the presentation layer of a Windows DNA application. If the service returns a response, the listener is responsible for packaging the response from the business facade into a message and sending it back to the client. The listener also handles requests for Web Service contracts and other documents about the Web Service. By adding a Web Service listener parallel to the presentation layer and giving it access to the existing business facade, it is easy to migrate a Windows DNA application to a Web Service. Web browser clients can continue to use the presentation layer, while Web Service client applications interact with the listener.

Web Services Stack

We can start with HTTP (HyperText Transfer Protocol): with this communication protocol, it is possible to send information from one point on the Internet to another. The information sent over the wire can be structured using XML (eXtensible Markup Language). XML defines the format and the semantics of the information, and it is the fundamental foundation for the later layers. SOAP (Simple Object Access Protocol) is a protocol that defines how to invoke function calls on objects that live in different environments.

Using SOAP, it is possible to overcome the problems that come up when trying to integrate different operating systems, object models, and programming languages. With SOAP, it becomes easy to integrate different kinds of business processes.

HTTP, XML, and SOAP can be seen as the core layers for Web Services. These layers define how Web Services have to interact with each other. All three are governed by open standards bodies: HTTP by the IETF, and XML and SOAP by the W3C (World Wide Web Consortium, http://www.w3.org/).

The protocol WSDL (Web Services Description Language) describes how to communicate with a Web Service. In the WSDL definition, different types of communication (bindings) are allowed. It's one thing to have developed a Web Service, but it is another to earn money with it as well. In order to do this, we need a central marketplace where we can publish our Web Service, so other parties can find it and use it. This is where UDDI (Universal Description, Discovery and Integration) comes in.

Web Services technology can be broadly classified into three key groups: the description stack, the discovery stack, and the wire stack. The description stack covers a wide range of technologies that describe Web Services in order to facilitate their common use for business process modeling and workflow composition in B2B relationships. The discovery stack covers technologies that provide directory, discovery, and inspection services. The wire stack consists of technologies that power the runtime engines of Web Services.

Building Blocks

Let us start like this: Web Services are building blocks for constructing distributed Web-based applications in a platform-independent, object model-neutral, and multilanguage manner. Web Services are based on open Internet standards, such as HTTP and XML, and form the basis of Microsoft's vision of the programmable Web. What are the "Building Blocks"? SOAP, WSDL, and UDDI.

SOAP

Now we will discuss why SOAP is a fundamental building block of Web Services.

SOAP is a standard way of assuring that the information needed to invoke services located on remote systems can be sent over a network to the remote system in a format the remote system can understand, regardless of what platform the remote service runs on or what language it is written in. Fundamentally, SOAP is an XML-based protocol that is designed to exchange structured and typed information on the Web. SOAP can be used in combination with a variety of existing Internet protocols and formats including HTTP, SMTP, and MIME and can support a wide range of applications from messaging systems to RPC.

SOAP addresses the issue of passing information to and from remote applications through firewalls. Firewalls often prohibit remote communication through ports other than certain predefined, well-known ports reserved for a specific purpose. This becomes an issue because most distributed protocols do not use assigned ports but select them dynamically. The solution, as implemented by Microsoft's SOAP technology, is to pass a call to a remote process through port 80, the port assigned to HTTP traffic. The remote calls ride on top of the HTTP protocol, using XML to define the format of the request and response messages. Among the advantages of this approach is clearly the ease with which firewall complications can be avoided; cross-platform compatibility is another. A drawback might be some inefficiency, since port 80 is the common port for all web traffic through the server.
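To make the tunneling idea concrete, here is roughly what a SOAP call riding on an ordinary HTTP POST looks like (the endpoint, the GetQuote method, and the SOAPAction value are all invented for the example). Because the request is plain HTTP on port 80, it passes through firewalls that would block DCOM or CORBA ports:

```python
# Build the SOAP request by hand, without sending it, to show
# that it is nothing more than XML carried in an HTTP POST.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="urn:example-quotes">
      <symbol>MSFT</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

request = (
    "POST /quote HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Content-Type: text/xml; charset=utf-8\r\n"
    f"Content-Length: {len(soap_body.encode('utf-8'))}\r\n"
    'SOAPAction: "urn:example-quotes#GetQuote"\r\n'
    "\r\n" + soap_body
)
print(request)
```

A firewall inspecting this traffic sees nothing but an ordinary POST of an XML document.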

SOAP was developed to solve a long-standing problem with developing applications for the Internet: interoperability. Imagine a world where you can access objects and services on remote (or local) servers in a platform-independent manner. Today's world is cluttered with different operating systems, different firewalls, different methods of making remote procedure calls, and different platforms. In order to interoperate across the Internet, both the client and server need to understand each other's security types and trusts, service deployment schemas, and implementation details, not to mention speak the same platform language (e.g., COM to COM, ORB to ORB, EJB to EJB, etc.). With SOAP, an end to all of this platform-specific confusion has arrived. Based on the industry-wide accepted IETF HTTP standard and W3C XML standard, SOAP bridges the gap between competing object RPC technologies and provides a lightweight messaging format that works with any operating system, any programming language, and any platform.

There are three main parts in the SOAP architecture.

  • An envelope that describes the contents of a message and how to process it.
  • A set of encoding rules for expressing instances of application-defined datatypes.
  • A convention for representing remote procedure calls and responses.
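All three parts show up in even the smallest message. The sketch below (the envelope namespace is the standard SOAP 1.1 one; the GetQuote method and its parameter are invented) parses an envelope and recovers the remote procedure call it represents:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

envelope = f"""<soap:Envelope xmlns:soap="{SOAP_NS}">
  <soap:Body>
    <GetQuote xmlns="urn:example-quotes">
      <symbol>MSFT</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(envelope)
body = root.find(f"{{{SOAP_NS}}}Body")
call = body[0]                                  # first child = the RPC call
method = call.tag.split("}")[1]                 # strip the namespace
params = {child.tag.split("}")[1]: child.text for child in call}

print(method, params)  # GetQuote {'symbol': 'MSFT'}
```

The envelope frames the message, the RPC convention says the Body's first child names the method and its children carry the parameters, and the encoding rules (omitted here for brevity) would govern how typed values are serialized.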

Simply stated, SOAP provides a way to access services, objects, and servers in a completely platform-independent manner. With SOAP, you can query, invoke, communicate with, and otherwise control services provided on remote systems without regard to the remote system's location, operating system, or platform.

SOAP by itself provides a way to exchange messages with Web Services, but it does not provide a way to find out what messages a Web Service might want to exchange. It also does not give you any way of finding Web Services or negotiating with them.

WSDL

The Web Services Description Language (WSDL), along with SOAP, forms an essential building block for Web Services. WSDL is an XML-based format for describing Web Services. It describes which operations a Web Service can execute and the format of the messages the service can send and receive. A WSDL document can be considered a contract between a client and a server. With WSDL-aware tools, you can also automate this process, enabling applications to integrate new services with little or no manual code. WSDL therefore represents a cornerstone of the Web Service architecture, because it provides a common language for describing services and a platform for automatically integrating those services.

While most WSDL documents are used in RPC-style request/response pairs, WSDL also supports one-way messages. WSDL supports the same four types of operations that SOAP messages do: request-response, solicit-response, one-way, and notification.
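As a small illustration, a WSDL-aware tool can discover a service's operations, and classify them, just by reading the portType element. The fragment below is a skeletal, hypothetical WSDL document, not a complete one:

```python
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

# Skeletal, invented WSDL: one portType with two operations.
wsdl = f"""<definitions xmlns="{WSDL_NS}">
  <portType name="QuotePortType">
    <operation name="GetQuote">
      <input message="tns:GetQuoteRequest"/>
      <output message="tns:GetQuoteResponse"/>
    </operation>
    <operation name="Ping">
      <input message="tns:PingRequest"/>
    </operation>
  </portType>
</definitions>"""

root = ET.fromstring(wsdl)
operations = {}
for op in root.iter(f"{{{WSDL_NS}}}operation"):
    has_output = op.find(f"{{{WSDL_NS}}}output") is not None
    # input followed by output -> request-response; input only -> one-way
    operations[op.get("name")] = "request-response" if has_output else "one-way"

print(operations)  # {'GetQuote': 'request-response', 'Ping': 'one-way'}
```

This is the mechanical basis for proxy generation: a tool reads exactly this structure and emits client stubs for each operation it finds.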

UDDI

UDDI (Universal Description, Discovery, and Integration) is the building block that will enable businesses to quickly, easily, and dynamically find and transact business with one another using their preferred applications. UDDI is an XML-based registry in which businesses worldwide can list themselves on the Internet. Its ultimate goal is to streamline online transactions by enabling companies to find one another on the Web and make their systems interoperable for e-commerce. UDDI is often compared to a telephone book's white, yellow, and green pages. The project allows businesses to list themselves by name, product, location, or the Web services they offer.

The UDDI focus is on providing large organizations the means to reach out to and manage their network of smaller business customers. The biggest issues facing UDDI are ones of acceptance and buy-in from businesses themselves, and implementation issues of scalability and physical implementation.

The UDDI Consortium, established by hundreds of companies, emerged in response to a series of challenges posed by the new Web Services model. These challenges included how to discover Web Services, how to categorize information about them, how to handle their global nature, how to provide for localization, and how to provide interoperability in both the discovery and invocation mechanisms. They also included how to interact with those discovery and invocation mechanisms at runtime.

For UDDI to provide the foundation for Web Services registries, therefore, it had to serve two primary roles within the Web Services model: service publication and service discovery.

Describing Web Services

We just talked about WSDL; now we will discuss describing Web Services as an introduction. We will not go too deeply into the issues in this article, because it assumes that you have more than a beginner's knowledge.

Describing Web Services requires a language such as WSDL. Since SOAP's conception, its designers planned for it to support a type system; the one they chose was XML Schema. The SOAP specification enables you to describe type information in one of two ways. The first relies on the use of the xsi:type attribute within your SOAP messages. In this manner, each message can be self-describing, so that the receiving end understands how to interpret the message parameters and their associated types.

The second manner enables the sender and receiver to rely on some form of schema to be referenced from an external but unspecified source. In this manner, the sender and the receiver are interacting based on a well-defined contract of types.
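The first, self-describing style looks like this in practice. In the sketch below (element names are invented; the namespace URIs are the standard XML Schema ones), the receiver reads each parameter's xsi:type attribute to decide how to interpret its value:

```python
import xml.etree.ElementTree as ET

XSI = "http://www.w3.org/2001/XMLSchema-instance"

# Each parameter carries its type inline via xsi:type.
message = f"""<parameters xmlns:xsi="{XSI}"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <symbol xsi:type="xsd:string">MSFT</symbol>
  <price xsi:type="xsd:double">27.5</price>
</parameters>"""

# Map declared schema types to Python decoders.
decoders = {"xsd:string": str, "xsd:double": float}

root = ET.fromstring(message)
values = {}
for child in root:
    declared = child.get(f"{{{XSI}}}type")          # e.g. "xsd:double"
    values[child.tag] = decoders[declared](child.text)

print(values)  # {'symbol': 'MSFT', 'price': 27.5}
```

With the second style, the xsi:type attributes disappear and the same type information comes from an external schema both parties agree on in advance.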

An XML schema alone, however, does not provide all the information that we need concerning Web Services. SOAP serialization and XML Schema datatypes address only the data itself; other aspects of a Web Service still need to be described. Therefore, WSDL was created.

WSDL is an XML language that uses several layers of abstraction to describe Web Services in a modular fashion. Furthermore, WSDL has a vocabulary that enables us to create independent datatype definitions, abstract message definitions, and service definitions. Once defined, the abstractions can be bound to concrete message formats, transport protocols, and endpoints to complete the overall package.

Essentially, WSDL defines an XML grammar that describes Web Services as collections of communication endpoints that are able to exchange messages with each other. However, the Microsoft implementation requires another file to map the invoked Web Service operations to COM object method calls. This additional file is expressed in the Web Services Markup Language (WSML), which is Microsoft's proprietary language for this particular purpose. Fortunately, the Microsoft SOAP Toolkit generates WSML files automatically.

Publishing Web Services

What does it mean to publish a Web Service? UDDI uses WSDL as a description language. WSDL documents can be organized as service implementation and service interface documents. The service implementation document maps to the UDDI businessService element, while the service interface document maps to the tModel elements. The first step in publishing a WSDL description in a UDDI registry is publishing the service interface as a tModel in the registry.

The UDDI registry is not run by Microsoft alone; IBM and Ariba operate repositories as well. This means that if we post information with one, it is replicated across the databases held by all of those companies. Each of the independent repositories has the same interface, giving any outside organization or individual the opportunity to post information to UDDI using the UDDI Publish Web Service and to search UDDI using the UDDI Inquire Web Service.

Once the tModel has been published, businesses that wish to provide services as defined in the tModel implement Web Services based on the WSDL definition. These services are accessible through a URL, and businesses then publish this information in the UDDI registry – the service URL is contained in the Binding Template of the service that the business publishes.

Publishing allows several activities, such as registration of new businesses and services, deletion of existing businesses and services, and management of security on the business data. The Publisher API is intended for software programmers or Independent Software Vendors who would like to publish their web services to a UDDI node.

The UDDI Publication API provides two families of functions for working with data structures, named save and delete. The save functions allow the user to modify existing entries in the registry and to create new ones; the delete functions completely remove the given data structure. This API is used primarily as a means of creating and updating business and service information that the end user is authorized to modify.
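To give a feel for the save functions, here is roughly the shape of a save_business message body in the UDDI Version 2 API (the auth token and business name are placeholders; a real call would first obtain a token via get_authToken, wrap this body in a SOAP envelope, and POST it to the registry's publish URL):

```python
# Placeholder values; not a working registry interaction.
auth_token = "authToken:PLACEHOLDER"
business_name = "Example Corp"

# An empty businessKey asks the registry to assign a new key,
# i.e. to create a new entry rather than update an existing one.
save_business = f"""<save_business generic="2.0" xmlns="urn:uddi-org:api_v2">
  <authInfo>{auth_token}</authInfo>
  <businessEntity businessKey="">
    <name>{business_name}</name>
  </businessEntity>
</save_business>"""

print(save_business)
```

The corresponding delete_business message carries the same authInfo plus the businessKey of the entry to remove.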

Although the UDDI registry allows you to access all the functionality of publishing and finding services programmatically through XML and SOAP, IBM and Microsoft supply a Web interface as well, so that you can access all this functionality from a Web browser.

This article only intends to give you a high-level overview of Web Services, WSDL, UDDI, and related subjects to help you understand future articles. You learned how a Web Service works, you received a high-level overview of the Web Services architecture, and you got some introductory information about describing and publishing Web Services.

First appeared at MSDN Academic Alliance. Reprinted with the author’s permission.

John Godel is a computer/software engineer in Massachusetts. He enjoys chess and basketball, and he “loves the programming world.”
