Posts Tagged 'WS*'


After innumerable heated discussions about REST versus SOAP at my workplace, I finally decided to dedicate some time to writing something about it. I began by trying to understand the reason for such passion on both sides of this discussion, and came to the conclusion that we approached the subject from different viewpoints: the developer's view and the architect's view. Allow me to explain my reasoning.

Developers often become acquainted with a technology and end up wanting to use it everywhere, as if it were the best thing since sliced bread. Architects, on the other hand, have to ponder all aspects of a technology, understand its essence, and decide whether or not it is applicable to the task at hand. I would say, pushing it a little to the extreme, that developers love REST while architects have a tendency not to like it so much. To an architect, REST can seem like anarchy, as if to say, "I don't want any standards that I can't understand, I want to do things my way!". The discussion of REST services versus SOAP services falls into this category, and I will try to help clarify when to use one or the other.


I am not going to give yet another definition of REST; there are plenty on the internet to suit everyone’s taste. Nonetheless, I will have to list its main characteristics:

  • Use of the HTTP protocol and its verbs (e.g., GET, PUT, POST, or DELETE)
  • Addressing of specific resources through URIs (Uniform Resource Identifiers)

More important than the definition of REST and its characteristics (which should nonetheless be well understood) is the approach to take.


Imagine a banking scenario with a database of customer accounts. To give customers access to their accounts, the IT department decided to expose web services but does not yet know which is the best solution: REST or SOAP.

The first step is to identify the resources. In this particular case, the resources are customer bank accounts.

Bank Accounts

The second step is to establish an addressing convention that, on the one hand, uniquely identifies each individual account and, on the other, all the accounts as a whole.

Resource URIs

So far so good: the REST approach seems to apply without breaking its definition. The next step is to expose the functionality to the customers through HTTP verbs. Let's start with the GetAccountBalance operation.

REST Scenario

This operation maps perfectly to the GET HTTP verb as suggested above. The same happens with the DepositAccount operation through the PUT HTTP verb, and the CloseAccount operation with the DELETE HTTP verb. The troublesome part comes with the AccountTransfer operation. First of all, this operation does not map directly to an HTTP verb; second, it addresses two different resources, the URI of the withdrawal account and the URI of the deposit account. To accomplish this with the REST approach, I would have to consider, for instance, the POST HTTP verb and pass the identifier of one of the resources (the withdrawal or the deposit account) in the HTTP payload. This violates the REST principles: first, the standard HTTP verbs do not map to this kind of operation; second, we cannot uniquely identify all the resources we are trying to address through the HTTP URI field.
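Sketching the mapping as raw HTTP requests makes the problem visible. The host, account numbers, and payload format below are hypothetical, chosen only to illustrate which verb addresses which resource:

```http
GET    /accounts/12345 HTTP/1.1        → GetAccountBalance
PUT    /accounts/12345 HTTP/1.1        → DepositAccount
DELETE /accounts/12345 HTTP/1.1        → CloseAccount

POST   /accounts/12345 HTTP/1.1        → AccountTransfer (forced fit)
<transfer to="/accounts/67890" amount="100" />
```

The first three operations address exactly one resource through the URI; the transfer has to smuggle its second resource into the payload, which is where the REST model breaks down.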

Following a SOAP approach, the AccountTransfer operation could be implemented in the following manner:

SOAP Scenario
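A SOAP-style contract is operation-centric rather than resource-centric. The following is a minimal sketch of that idea, with an in-memory dictionary standing in for the accounts database; the class, account numbers, and amounts are invented for illustration, not taken from the article's actual implementation:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch: an operation-centric AccountTransfer, the kind of
// call that maps naturally to SOAP but not to a single HTTP verb + URI.
class BankService
{
    // In-memory stand-in for the customer accounts database.
    private readonly Dictionary<string, decimal> accounts =
        new Dictionary<string, decimal> { { "12345", 100m }, { "67890", 50m } };

    // One operation, two resources: the withdrawal and the deposit accounts.
    public void AccountTransfer(string fromId, string toId, decimal amount)
    {
        if (accounts[fromId] < amount)
            throw new InvalidOperationException("Insufficient funds");
        accounts[fromId] -= amount;
        accounts[toId] += amount;
    }

    public decimal GetAccountBalance(string id)
    {
        return accounts[id];
    }

    static void Main()
    {
        BankService svc = new BankService();
        svc.AccountTransfer("12345", "67890", 30m);
        Console.WriteLine(svc.GetAccountBalance("12345")); // prints 70
        Console.WriteLine(svc.GetAccountBalance("67890")); // prints 80
    }
}
```

Note that the operation name carries the semantics; neither account needs to be singled out in an address, which is exactly what the HTTP-verb mapping could not express.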


While REST is a good approach for some situations (typically CRUD scenarios), other applications need a more flexible approach such as SOAP. REST is better suited to storage-based applications such as Amazon's S3, Windows Azure Storage Services, VMware vCloud, and, broadly, most cloud computing services out there. SOAP, on the other hand, is better suited to operations where logic, rather than resources, is exposed. Typically, SOAP is used in SOA implementations (as opposed to WOA with REST) where there is a greater need for standards such as WS* (security, interoperability, discoverability, reliable messaging, transactions, etc.). In terms of security, and as opposed to WS*, which has a very well-defined and standard security model, REST does not have any predefined security methods (often relying solely on HTTPS), leaving developers to define their own.

The Azure Platform and Usage Scenarios

We have all seen the picture below showing the main blocks of the Azure Platform. But how, exactly, do all of these services work together to provide us with a cloud solution? As I usually say, Microsoft is not extremely good at inventing new things, but it is extremely good at turning a particular idea into a great product that really makes sense to use. With Azure, Microsoft took all its great products and frameworks and allowed us to use them in a cloud environment in a way very similar to how we use them on-premises. Actually, that's not quite true; they made it even easier.

Azure Services Platform

Usually, what we get from a cloud computing solution is an abstraction from the underlying hardware and software on which our app is going to run. Some providers even offer other services such as database management systems, but few go as far as providing content management services, CRM services, mesh services, access control services, SQL reporting and analysis services, a service bus, workflow services, etc.

The picture above shows the three main blocks of the Azure Platform: the operating system, the services, and some client portals. Additionally, we get a development environment that allows us to use our favourite programming language. Of course, you will not get an Oracle database or an Apache web server, but for the most part you don't even need to know there is a SQL Server or an IIS under the covers. Zooming in, we get to see the functionality addressed within each of these main blocks.

Azure Services Platform Details

Within Windows Azure, the operating system for the cloud, we have two main services: the management services, also known as the Fabric Controller, which take care of all the virtualization, deployment, scaling, logging, tracing, failure recovery, etc., and the storage system, which provides us with a simple way to keep our data in blobs, tables (not SQL tables), and queues. The technologies we can use to reach all of this functionality are varied: REST, JSON, HTTP, SOAP, etc. To host our apps and services, we have IIS7 and the .NET Framework 3.5, which allows us to expose our services any way we want, from the less standard REST to the more standard WS*. Whoever is familiar with WCF will naturally take advantage of a new set of bindings that allow your services to be exposed through the new Service Bus in a direct or publish/subscribe fashion.

The services layer provides Live Services, from mesh services that allow you to share and synchronize folders and files, to Identity and Directory Services to manage access to resources and applications. The .NET Services consist of an Access Control Service, a Service Bus Service, and a Workflow Service. The Access Control Service, built using the "Geneva" Framework, is basically an STS (Security Token Service) that provides a claims-based identity model, along with federation capabilities through WS* standards, offering authentication and authorization services to anyone trying to access the services layer. The Service Bus, formerly BizTalk Services (I'm glad they changed the name), basically provides publish/subscribe functionality for calling services, as well as location unawareness between the service and the service consumer. The Workflow Service provides service orchestration and integrates with the Service Bus and the Access Control Service to provide more complex functionality. It also provides all the functionality you can find in Workflow Foundation, like support for long-running workflows, the workflow designer, etc. The SQL Services provide typical data, reporting, and analysis services.

In broader terms, the Azure Platform provides the services shown below:

Azure Platform

We are now going to take a quick look at some usage scenarios: how exactly all of these services and functionalities can work together to compose complex applications, processes, and services. Some of these I have tried myself, and I will shortly be posting some practical examples.


This is a simple use of the Storage environment for applications that merely require a way to keep their data in a persistent store. Microsoft provides Tables, Queues, and Blobs, and is working on new ways to store your data, namely File Streams, Locks, and Caches. Tables allow you to store data in a similar way to a DBMS, but, in fact, there is no SQL Server involved. Queues allow you to temporarily store data for processing and are a good way to relay data from one service to another, as we will see. Blobs are more oriented to storing unstructured data such as different file formats.

Web Role Example

In this scenario, we want to deploy to the cloud an ASP.NET web application that possibly uses some storage to keep part of its data. For this, we use the Hosted Services capability, namely a Web Role, to host the web app.

Web and Worker Role Example

In this example, we extend the previous one to use a Worker Role to do some background asynchronous processing. One way to relay the data to be processed is to send it through a Queue: the web application posts the data in the queue, and the worker role periodically checks the queue for data to process. Web Roles are, basically, web applications or services hosted in IIS. Worker Roles are NT services, with a specific interface similar to the SCM (Service Control Manager), that are constantly running and looking for something to do.
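The relay pattern above can be sketched with an in-memory queue standing in for an Azure Storage queue. This is illustrative only: a real Worker Role would talk to the durable Queue service (e.g., over its REST interface), sleep between polls, and delete messages only after processing them successfully:

```csharp
using System;
using System.Collections.Generic;

class QueueRelaySketch
{
    static void Main()
    {
        // Stand-in for an Azure queue; the real one is durable and remote.
        Queue<string> workQueue = new Queue<string>();

        // Web Role side: post work items instead of processing them inline,
        // so the request returns quickly.
        workQueue.Enqueue("resize-image:photo1.png");
        workQueue.Enqueue("resize-image:photo2.png");

        // Worker Role side: a loop that polls for messages to process.
        while (workQueue.Count > 0)
        {
            string message = workQueue.Dequeue();
            Console.WriteLine("Processing " + message);
        }
    }
}
```

The queue decouples the two roles: the web role never waits for the work to finish, and the worker role scales independently of the web front end.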

ESB Services Example

As explained in the previous scenario, Worker Roles are constantly running, looking for, or waiting for something to do. Worker Roles can access the Storage or call external services through the Service Bus, to collect data, send data, or simply notify an external service of some event.

Worker role Example

In fact, Worker Roles can call any service within the cloud.

Worker Role Example

Your applications, services, or any other processes can call into any of the services provided by Azure directly to enrich their functionality. The .NET Services, on their own, can then call other services and/or interact with the Storage system. Through the Storage system we can trigger worker processes to do some asynchronous work for us.

ESB Services Example

The Service Bus is a powerful and useful service that basically acts as a mediator between consumers and services. This mediation can be accomplished in two ways: one that allows direct calls from a consumer to a service, the other in a publish/subscribe fashion. In both cases, there is no knowledge of the location of the service; the consumer addresses the Service Bus unaware of the service's location. This addressing is accomplished through a URI of the sb:// scheme that both service and consumer use to register themselves on the Service Bus. The service must be registered and active on the Service Bus in order for the call from the consumer to reach it. WCF provides a new set of bindings that allow you to address the Service Bus: BasicHttpRelayBinding, WSHttpRelayBinding, NetTcpRelayBinding, etc.
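As an illustration, a direct-relay endpoint might be configured as below. The service name, contract, and the namespace in the sb:// address are hypothetical; the binding is one of the relay bindings listed above:

```xml
<system.serviceModel>
  <services>
    <service name="EchoService">
      <!-- Both service and consumer register against this sb:// address;
           neither needs to know where the other actually runs. -->
      <endpoint address="sb://mynamespace.servicebus.windows.net/EchoService"
                binding="netTcpRelayBinding"
                contract="IEchoService" />
    </service>
  </services>
</system.serviceModel>
```

The consumer's client configuration points at the same sb:// address, which is what makes the two sides location-unaware.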

ESB Services Example

As mentioned, the Service Bus allows a publish/subscribe mechanism for service invocation. This allows one call from a consumer to reach several services that expose the same interface. To work in this configuration, the service contracts must not return values. WCF provides a particular binding for this configuration, NetEventRelayBinding.
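The no-return-value requirement can be sketched without the Service Bus itself. Everything below is illustrative (the contract, subscribers, and account number are invented): a single publish can fan out to every subscriber of the same contract precisely because the operation returns nothing, so there is no single reply to wait for:

```csharp
using System;
using System.Collections.Generic;

// Illustrative one-way event contract: no return value, so one publish
// can reach any number of subscribers (as NetEventRelayBinding allows).
interface IAccountEvents
{
    void AccountClosed(string accountId);
}

class AuditSubscriber : IAccountEvents
{
    public void AccountClosed(string accountId)
    {
        Console.WriteLine("Audit: account " + accountId + " closed");
    }
}

class MailSubscriber : IAccountEvents
{
    public void AccountClosed(string accountId)
    {
        Console.WriteLine("Mail: notify owner of " + accountId);
    }
}

class PubSubSketch
{
    static void Main()
    {
        // Stand-in for services registered on the same sb:// address.
        List<IAccountEvents> subscribers = new List<IAccountEvents>
        {
            new AuditSubscriber(), new MailSubscriber()
        };

        // One call from the consumer reaches all subscribers.
        foreach (IAccountEvents s in subscribers)
            s.AccountClosed("12345");
    }
}
```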

ESB and Workflow Services Example

WCF also provides context bindings, WSHttpRelayContextBinding and NetTcpRelayContextBinding, to be used for WCF-WF integration. These bindings allow WF Receive activities (web services exposed directly from WF workflows) to receive contextual calls, i.e., calls carrying an extra SOAP header (instanceID) with a reference to the persisted workflow. Those of you familiar with WF-WCF integration will easily understand the importance of these two bindings.

ESB Services Example

As we have seen, the Service Bus can call any other service available in the cloud and outside of it. Worker Roles can also implement services and register them on the Service Bus, thus allowing external apps to call them.

Access Control Service Example

Every call to the services is validated against the Access Control Service. The Access Control Service is actually an STS (Security Token Service) that intercepts all calls, authenticating the caller of the service and returning a number of claims the service uses to authorize the call. Now, this is a complex topic on its own, which I will write about in a later entry on this blog. For now, I just wanted to give an idea of how this service is used.
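The claims-based flow can be sketched in miniature. The claim types, user names, and role values below are invented for illustration; the real Access Control Service issues signed tokens through WS* protocols rather than plain dictionaries:

```csharp
using System;
using System.Collections.Generic;

class ClaimsSketch
{
    // Stand-in for the token returned by the STS after authentication:
    // a set of claims about the caller.
    static Dictionary<string, string> IssueToken(string user)
    {
        return new Dictionary<string, string>
        {
            { "name", user },
            { "role", user == "alice" ? "accountholder" : "guest" }
        };
    }

    // The service authorizes based on the claims in the token,
    // not on how the caller was authenticated.
    static bool CanReadBalance(Dictionary<string, string> token)
    {
        return token["role"] == "accountholder";
    }

    static void Main()
    {
        Console.WriteLine(CanReadBalance(IssueToken("alice")));   // True
        Console.WriteLine(CanReadBalance(IssueToken("mallory"))); // False
    }
}
```

The point of the pattern is the separation: the STS decides who the caller is, the service decides only what those claims are allowed to do.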

Access Control Service Example

The same applies when accessing the web apps in the Web Role hosted services, or the storage.

There are a number of possibilities; just use your imagination (and some best practices) and you can, basically, mix and match these services according to your needs. You can build complex processes, applications, and services using these building blocks without having to worry about setting up the infrastructure that supports them. The benefits are obvious, and I believe that in the long term this is where IT is heading.

Hosting a WCF Service in Windows Azure

When I first tried to create and deploy a WCF web service into the cloud, I faced several constraints, some derived from my inexperience with the Azure Platform, others due to the fact that this is still a fairly recent technology from Microsoft (a CTP, after all). In the next few paragraphs I will walk you through the steps to create and deploy a WCF service exposed with the WSHttpBinding.

There are a few prerequisites that need to be met in order to proceed with Azure development. To set up a proper development environment, one needs:

  • Windows Vista or Windows 2008
  • Visual Studio 2008 + SP1
  • Windows Azure SDK and Windows Azure Tools for Microsoft Visual Studio
  • Access to the Azure Services Developer Portal

Now, let's start by creating a new project in Visual Studio of type "Web Cloud Service".


Leave the configuration and definition files as they were created by Visual Studio. Note that CTP access permits only one instance, i.e., only one virtual machine; do not change this setting. You can play with it only on the local Development Fabric.

Even though all we want is to create and deploy a WCF service, keep the "default.aspx" page as a quick way to verify that the package was properly deployed. For that, just add a label to the page with some text.


Now add a WCF Service to the project as follows


Alter the service contract to something a little more demo-friendly, like:

[ServiceContract]
public interface IService
{
    [OperationContract]
    string Echo(string msg);
}

public class Service : IService
{
    public string Echo(string msg)
    {
        return "Echo: " + msg;
    }
}

Also alter the configuration file (web.config), specifying the security policy for your binding:

<bindings>
  <wsHttpBinding>
    <binding name="wsConfig">
      <security mode="None" />
    </binding>
  </wsHttpBinding>
</bindings>

<services>
  <service behaviorConfiguration="MyCloudService_WebRole.ServiceBehavior"
           name="MyCloudService_WebRole.Service">
    <endpoint address=""
              binding="wsHttpBinding"
              bindingConfiguration="wsConfig"
              contract="MyCloudService_WebRole.IService" />
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
  </service>
</services>

Test your web application and service locally by right-clicking the default.aspx page and selecting "View in Browser".


This test used the ASP.NET Development Server. If you use the local Azure Development Fabric instead, you will get the following error when you test your service. This appears to be a bug, because you do not get the same error once you deploy to the real cloud.


Speaking of deployment, right-click the MyCloudService project and select "Publish". Once you select the "Publish" option, you should see a browser window open on your Azure project, as shown below, as well as an explorer window with the configuration and definition files. Press the "Deploy…" button and follow the instructions.


Press the "Run" button to test your web app and service; this will take several minutes while your VM starts.


To test your app, simply click the temporary DNS name provided and you should get something similar to


Now, change the URL to address the Web Service and you should get


Notice that the URL for the WSDL provided by Azure is an internal URL which is not resolved; this has been reported as a bug and will be fixed. To view your WSDL, simply change the URL in the browser to


Now, promote your project to the production environment



This should be quite fast, since it only changes the DNS name under which your app is exposed.

Our test would not be complete without building a client that actually calls the service, so let's do it. Since the WSDL provided in the cloud has references to URLs that are not resolved from the client, the best way to build the client is to run the service locally with the ASP.NET Development Server. For that, simply double-click the "ASP.NET Development Server"


And browse to the WSDL


Then add a console application to the solution as follows


Reference the Web Service to create the proxy


And add the following code to the Main function:

static void Main(string[] args)
{
    ServiceReference1.ServiceClient proxy = new ServiceReference1.ServiceClient();
    Console.WriteLine(proxy.Echo("Hello Cloud World!"));
}

First, test it locally, then change the address in the configuration file to the one in the cloud:

<endpoint address=""
          binding="wsHttpBinding"
          bindingConfiguration="WSHttpBinding_IService"
          contract="ServiceReference1.IService"
          name="WSHttpBinding_IService" />

Compile it and run it against the cloud. You should get an exception as follows

"The message with To '…' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree."

This is due to a verification made by the default "EndpointAddressMessageFilter", which detects a mismatch between the two addresses. The cause of this may be related to the virtualization of the service address, probably the internally assigned address. The following code was retrieved with Reflector and shows the logic behind the "Match" function:

public override bool Match(Message message)
{
    if (message == null)
    {
        throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgumentNull("message");
    }
    Uri to = message.Headers.To;
    Uri uri = this.address.Uri;
    return (((to != null) && this.comparer.Equals(uri, to)) && this.helper.Match(message));
}

Fortunately, there is a behavior that resolves this problem; add it to the service class as shown below, recompile, and redeploy the service to the cloud:

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class Service : IService
{
    public string Echo(string msg)
    {
        return "Echo: " + msg;
    }
}

Run the client console application again and this time you should get a response back from your cloud service.