A REST Tale

After countless heated discussions about REST versus SOAP at my workplace, I finally decided to dedicate some time to writing something about it. I began by trying to understand the reason for such passion on both sides of the discussion, and came to the conclusion that we approached the subject from different views: the developer view and the architect view. Allow me to explain my reasoning.

Developers often become acquainted with a technology and end up wanting to use it everywhere, as if it were the best thing since sliced bread. Architects, on the other hand, have to ponder all aspects of the technology, understand its essence, and decide whether or not it is applicable to the task at hand. I would say, pushing it a little to the extreme, that developers love REST while architects have a tendency not to like it so much. To an architect, REST can seem like anarchy, as in: "I don't want any standards that I can't understand, I want to do things my way!". The discussion of REST services versus SOAP services falls in this category, and I will try to contribute to clarifying when to use one or the other.

REST

I am not going to give yet another definition of REST; there are plenty on the internet to suit everyone’s taste. Nonetheless, I will have to list its main characteristics:

  • Use of the HTTP protocol and its verbs (e.g., GET, PUT, POST or DELETE)
  • Addressing of specific resources through URIs (Uniform Resource Identifiers)

More important than the definition of REST and its characteristics (which are very important and should be well understood) is the approach to take.

Scenario

Imagine a banking scenario with a database of customer accounts. To give customers access to their accounts, the IT department decided to expose Web Services but doesn't yet know which is the best solution: REST or SOAP.

The first step is to identify the resources. In this particular case, the resources are customer bank accounts.

Bank Accounts

The second step is to establish an addressing convention that, on the one hand, uniquely identifies each individual account and, on the other, all the accounts as a whole.

Resource URIs
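For illustration, such a convention might look like https://bank.example.com/accounts for all the accounts as a whole, and https://bank.example.com/accounts/12345 for the individual account with identifier 12345 (the host name and identifier are, of course, hypothetical).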

So far so good, the REST approach seems to apply without breaking its definition. The next step is to expose the functionality to the customers through HTTP verbs. Let's start with the GetAccountBalance operation.

REST Scenario

This operation maps perfectly to the GET HTTP verb as suggested above. The same happens with the DepositAccount operation through the PUT HTTP verb, and with the CloseAccount operation through the DELETE HTTP verb. The troublesome part comes with the AccountTransfer operation. First of all, this operation does not map directly to an HTTP verb; second, it addresses two different resources, the URI of the withdrawal account and the URI of the deposit account. To accomplish this with the REST approach, I would have to consider, for instance, the POST HTTP verb and pass the identifier of one of the resources (the withdrawal or the deposit account) in the HTTP payload. This violates the REST principles: first, the standard HTTP verbs do not map to this kind of operation, and second, we cannot uniquely identify all the resources we are trying to address through the HTTP URI field.
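To make the mapping concrete, here is a minimal sketch of how the first three operations might look using WCF's WebHttp programming model (System.ServiceModel.Web); the contract name, URI templates, and parameter types are illustrative assumptions, not part of the original scenario.

using System.ServiceModel;
using System.ServiceModel.Web;

// Hypothetical REST-style contract: each operation maps to an HTTP verb and a resource URI.
[ServiceContract]
public interface IAccountResource
{
    // GET /accounts/{accountId}/balance -> GetAccountBalance
    [OperationContract]
    [WebGet(UriTemplate = "accounts/{accountId}/balance")]
    decimal GetAccountBalance(string accountId);

    // PUT /accounts/{accountId}/deposits -> DepositAccount (the amount travels in the request body)
    [OperationContract]
    [WebInvoke(Method = "PUT", UriTemplate = "accounts/{accountId}/deposits")]
    void DepositAccount(string accountId, decimal amount);

    // DELETE /accounts/{accountId} -> CloseAccount
    [OperationContract]
    [WebInvoke(Method = "DELETE", UriTemplate = "accounts/{accountId}")]
    void CloseAccount(string accountId);
}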

Following a SOAP approach, the AccountTransfer operation could be implemented in the following manner:

SOAP Scenario
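By way of contrast, here is a minimal sketch of what the SOAP contract could look like in WCF; the namespace, operation signature, and parameter names are illustrative only.

using System.ServiceModel;

// Hypothetical SOAP contract: the operation, not a resource, is the unit of exposure,
// so both account identifiers simply travel in the message body.
[ServiceContract(Namespace = "http://bank.example.com/accounts")]
public interface IAccountService
{
    [OperationContract]
    void AccountTransfer(string withdrawalAccountId, string depositAccountId, decimal amount);
}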

Conclusion

While REST is a good approach for some situations (typically CRUD scenarios), other applications need a more flexible approach such as SOAP. REST is more oriented towards storage-based applications such as Amazon's S3, Windows Azure Storage Services, VMWare vCloud, and, broadly, most Cloud Computing services out there. SOAP, on the other hand, is more oriented towards operations where logic is exposed instead of resources. Typically, SOAP is used in SOA implementations (as opposed to WOA with REST) where there is a greater need for standards such as WS-* (security, interoperability, discoverability, reliable messaging, transactions, etc.). In terms of security, and as opposed to WS-*, which has a very well defined and standard security model, REST does not have any predefined security methods (often relying solely on HTTPS), leaving it to developers to define their own.

Cloud Computing: The New IT Paradigm

Much has been said about the new concept of Cloud Computing. There are a myriad of definitions and just as many companies claiming to have a Cloud Computing solution. What Cloud Computing really is, and which solutions will deliver what this new paradigm claims to deliver, are the questions most people want answered. For starters, let me just state what is obvious to the more experienced who have seen this before: the new thing about Cloud Computing is its name. In the rest of this entry I'll walk you through the evolution of a few concepts that lead us to today's so-called Cloud Computing paradigm. Without further ado, let's dive right into it.

The Industrialization of IT

Information technology has always been about turning computerized systems into a way of getting tasks done faster and more reliably. In the last couple of decades this journey has bumped up a notch with the introduction of object-oriented programming, component-based software, service-oriented architectures (SOA), business process management (BPM) technologies, the internet and its technologies (Web 2.0), and so on. The latest step on this long list of technologies, paradigms, and concepts is Cloud Computing. Leveraging technologies such as virtualization, SOA, Web 2.0, and grid computing, Cloud Computing promises greater rates of industrialization of IT. Making things happen faster, more reliably, and in an easier-to-manage fashion is still the main goal of IT today.

Economy of Scale

With the build-up of the industrialization of IT, one inevitable outcome is the appearance of a new economy of scale that will allow IT providers to deliver services more cheaply, making IT more like a commodity and less like a burden. Businesses can increasingly treat IT as an operational cost (OPEX) rather than a capital expenditure (CAPEX), which makes a lot more sense for most.

Definition of Cloud Computing

What is Cloud Computing after all? There are innumerable definitions of Cloud Computing and also huge disagreements about what it really is and means. So, I'll try to give you an idea of what it is, hopefully without contributing further to the confusion. In my opinion, one of the reasons there is so much confusion is that the concepts get mixed up, namely the technical and the purely conceptual.

                Conceptual Definition

According to NIST, the National Institute of Standards and Technology, Cloud Computing is:

“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

This is a good conceptual definition of Cloud Computing that touches on its main characteristics: on-demand self-service, broad network access, rapid elasticity, resource pooling, and measured service.

Today, it is more or less accepted that there are three Cloud Computing models, depending on the type of service provided: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service).

                                IaaS – Infrastructure as a Service

Infrastructure as a Service provides infrastructure capabilities like processing, storage, networking, security, and other resources which allow consumers to deploy their applications and data. This is the lowest level provided by the Cloud Computing paradigm. Some examples of IaaS are: Amazon S3/EC2, Microsoft Windows Azure, and VMWare vCloud.

                                PaaS – Platform as a Service

Platform as a Service provides application infrastructure such as programming languages, database management systems, web servers, application servers, etc. that allow applications to run. The consumer does not manage the underlying platform, including networking, operating system, storage, etc. Some examples of PaaS are: Google App Engine, Microsoft Azure Services Platform, and ORACLE/AWS.

                                SaaS – Software as a Service

Software as a Service is the most sophisticated model, hiding all the underlying details of networking, storage, operating system, database management systems, application servers, etc. from the consumer. It provides consumers with end-user software applications, most commonly through a web browser (but it could also be through a rich client). Some examples of SaaS are: Salesforce CRM, Oracle CRM On Demand, Microsoft Online Services, and Google Apps.

In reality, there are a number of other models emerging that, for some analysts, deserve a classification of their own, not falling within the models just described. Some examples of these are:

                                AIaaS – Application Infrastructure as a Service

Some analysts consider this model to provide application middleware, including application servers, ESB, and BPM (Business Process Management).

                                APaaS – Application Platform as a Service

Provides application servers with added multitenant elasticity as a service. The PaaS (Platform as a Service) model mentioned before includes both AIaaS and APaaS.

                                DaaS – Desktop as a Service

Based on application streaming and virtualization technology, provides desktop standardization, pay-per-use, management, and security.

                                BPaaS – Business Process as a Service

Provides business processes such as billing, contract management, payroll, HR, advertising, etc. as a service.

                                CaaS – Communications as a Service

Management of the hardware and software required for delivering voice over IP, instant messaging, and video conferencing, for both fixed and mobile devices.

                                NaaS – Network as a Service

It allows telecommunication operators to provide network communications, billing, and intelligent features as services to consumers.

                                XaaS – Everything as a Service

Broad term that embraces all the models discussed here.

Technical Definition

Trouble usually starts when one tries to add technical concepts to the definition of a paradigm that is, conceptually, above technology. The confusion usually starts with the introduction of virtualization, Web 2.0, grid computing, and so on and so forth. This reminds me of innumerable discussions with fellow colleagues about SOA and Web Services: the former a concept, the latter a technology that best applies it. Undoubtedly, virtualization, grid computing, Web 2.0, SOA, WOA, etc., are the technology trends that will, for now, fuel the Cloud Computing initiative, but these are ephemeral, and the concept remains the same regardless of technology changes.

                Cloud Types

In terms of implementation, there are three major types of cloud deployments: private clouds, public clouds, and hybrid clouds.

Cloud Types

                               Private Clouds

Private clouds (aka on-premises clouds) are cloud deployments inside the organization's premises, managed internally without the benefits of the economy of scale but with advantages in terms of security. This is becoming a new form of architecture for the datacenter, sometimes referred to as a Datacenter-in-a-box. VMWare is pioneering this approach, delivering products that help implement this type of cloud, namely vCloud, vCenter, and vSphere. VMWare is also leading an effort to achieve standardization for the cloud through the DMTF (Distributed Management Task Force) organization.

                               Public Clouds

Public clouds are the original concept of cloud computing, based on the ubiquity of the internet. This type of cloud provides all the benefits of the economy of scale, ease of management, and ever-growing elasticity. The major concern about this style of deployment is security, and that is the main reason the other types of cloud deployment have a say.

                                Hybrid Clouds

Hybrid clouds are a deployment type that sits between private and public clouds. They are usually a combination of private and public clouds managed through the same administration and monitoring consoles (hence the importance of cloud standardization).

Conclusion

Much more than the technology that supports it, Cloud Computing is the latest plateau in the evolution of the IT industrialization process. Looking back at recent years of the IT industry, it was predictable that something like Cloud Computing would come along to revolutionize it. It seems that for a while the "techie" people took over the IT business, always eager to try new technologies, often with little value for the business they were trying to support. Now business is back to claim added value from IT departments, and Cloud Computing may very well be the answer.

All Roads Lead to Rome – Towards the Clouds

Cloud Computing is growing, and every solution provider wants to be part of the hype. This new trend promises to abstract IT professionals away from the underlying nuts and bolts of server virtualization, storage allocation, scalability, availability, and operational overhead. It also aims to deliver on-demand, self-service capacity to reliably run applications through a simple administration console that allows you to monitor service levels and react accordingly. In a nutshell, this is the main idea of Cloud Computing.

Every company has realized the potential of this new idea, and they are all rushing to provide the most comprehensive solution. Cloud solutions come in many forms, and some providers even cover all the different flavors, IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service) being the most common. So what we see is companies traditionally from the application infrastructure arena starting to incorporate virtualization solutions into their platforms to deliver Cloud platforms in one or all of these forms. We also see companies traditionally from the virtualization market providing better management solutions and partnering with application solution providers to deliver their Cloud platforms, in one or all of these forms.

Another question that may arise is: if you are out in the market to purchase one of these solutions, which one best serves your interests? Hard to tell, but I would say it all depends on your particular requirements. Companies like VMWare and Citrix have strong virtualization products. VMWare, in particular, has partnered with a few other companies to provide virtual appliances with operating systems, database management systems, application servers, CRM, ERP, collaboration and communications solutions, etc. On the other side of the spectrum, companies like ORACLE, Microsoft, and IBM also provide excellent solutions based on their application infrastructure and their own virtualization solutions. ORACLE, Microsoft, and IBM are extremely good at application infrastructure and are gaining expertise in the virtualization field. ORACLE has a virtualization solution based on the Xen open source project, and is in the process of acquiring Virtual Iron and Sun Microsystems, both of which have virtualization solutions based on Xen. Microsoft has its own Hyper-V solution. All of them also support VMWare virtualization solutions.

Bottom line, every new reference enterprise solution in the market is aiming at the "clouds", providing a new form of datacenter architecture based on simplicity and ease of use. Computer systems tend to evolve towards the use of standards, achieving complex architectures based on building blocks made of well-known and reliable technologies. We have seen it in the past, from objects, to components, to services, to composite applications, and from virtualization, to RTI, to Cloud Computing, and we will continue to see more and more levels of abstraction.

Virtualization and Cloud Computing

The most recent solutions from Microsoft and VMWare incorporate Cloud principles into their virtualization solutions. This is a predictable move, since one of the major features of Cloud solutions is elasticity, which is accomplished largely through virtualization. Cloud Computing solutions provide sophisticated administration tools as well as service level monitoring tools, so for visionary companies like Microsoft and VMWare, extending these tools to their virtualization offerings is to be expected. Additionally, VMWare extended its product to allow on-premises installation, letting customers mitigate most of the security issues raised by an off-premises Cloud solution. Microsoft's position on this isn't yet clear.

The following diagram shows the architecture of Microsoft and VMWare solutions:

 VirtualizationCloudComputing1

Microsoft's solution, taking the company's background into consideration, targets the developer community, providing an integrated PaaS solution including application services, database services, access control services, etc.

VMWare, on the other hand, relies on established partnerships to provide production ready application services through its Virtual Appliance Program.

The following diagram shows the use of VMWare´s solution on a Cloud environment:

 VMWare Cloud Architecture

The following diagram shows the use of Microsoft´s solution on a Cloud environment. Notice that Microsoft has not yet decided to provide an On-Premises Cloud solution, hence the grey cloud:

 Microsoft Cloud Architecture

VMWare has a credible solution for implementing datacenter setups on-premises. The downside of VMWare's solution is that it only provides IaaS, which is to be expected from a company dedicated to virtualization. Nevertheless, VMWare has established several partnerships with application software providers to deliver Virtual Appliances that provide application engines (e.g., WebSphere), database engines (e.g., ORACLE), etc. Microsoft's Cloud solution is more complete, in the sense that it provides a fully loaded PaaS solution.

Since I have dedicated some entries on this blog to Microsoft's Cloud solution, I will now concentrate on VMWare's. VMWare's offerings have evolved from a pure virtualization solution, to a virtual datacenter solution with sophisticated administration tools, to a Cloud IaaS-oriented operating system.

 VDC-OS and Cloud-OS

With its vCloud solution, VMWare delivers a Cloud operating system for Cloud providers. Many of these companies have already implemented VMWare's vCloud solution, and this trend is expected to continue. VMWare is also targeting on-premises installations, taking advantage of the fact that Microsoft hasn't taken a stand on this kind of installation.

Virtualization Suites

After my incursion into virtualization internals, I decided to look at three of the most popular hypervisor solutions on the market: Microsoft's Hyper-V, VMWare's ESX, and Citrix's XenServer. These are some of the findings:

Supported host operating systems:

Microsoft Hyper-V: Windows 2008 (64-bit machines with AMD-V or Intel-VT enabled processors)
VMWare ESX/ESXi: Linux (ESX) / POSIX (ESXi) (64-bit x86 machines, with or without AMD-V or Intel-VT enabled processors)
Citrix XenServer: Linux (64-bit x86 machines; requires AMD-V or Intel-VT enabled processors for Windows guest support)

As far as guest operating systems are concerned:

Microsoft Hyper-V: Windows Server 2000, Windows Server 2003 (x86, x64), Windows Server 2008 (x86, x64), Linux, Windows XP (x86, x64), Windows Vista (x86, x64), Windows 7 (x86, x64)
VMWare ESX/ESXi: Windows Server 2000, Windows Server 2003 (x86, x64), Windows Server 2008 (x86, x64), Linux, Windows XP (x86, x64), Windows Vista (x86, x64), Windows 7 (x86, x64), NetWare, FreeBSD, Solaris
Citrix XenServer: Windows Server 2000, Windows Server 2003 (x86, x64), Windows Server 2008 (x86, x64), Linux, Windows XP (x86, x64), Windows Vista (x86, x64)

So, how do all of these technologies fit in the solution market? Basically, solution wise there are two main different areas of virtualization, server virtualization, and desktop virtualization.  Both of these provide increasingly more sophisticated administration tools that allow for effortless virtual machine allocation and service level monitoring.

The following table presents the suite of products provided by these three major players:

Virtual desktop: Citrix XenDesktop / Microsoft Virtual PC / VMWare Workstation
Desktop streaming: Citrix XenDesktop / Microsoft RDS (Remote Desktop Services, formerly Terminal Services) / VMWare View (VMWare VDI)
Desktop distribution: Citrix XenDesktop / Microsoft MED-V (Microsoft Enterprise Desktop Virtualization, based on Virtual PC) / VMWare View (VMWare VDI)
Application delivery: Citrix XenApp / Microsoft App-V / VMWare ThinApp
Hypervisor: Citrix XenServer / Microsoft Hyper-V / VMWare ESX/ESXi
Cloud OS: Citrix Cloud Center C3 (Citrix XenServer, XenApp, XenDesktop) / Windows Azure / VMWare vSphere (vCloud)

Virtualization, as we know it, is ending. The technologies behind it are growing strong and will continue to do so, but Cloud Computing is forcing virtualization to be seen as IaaS (Infrastructure as a Service). Nobody really wants to deal with virtualization's nuts and bolts; companies demand ease of use. Increasingly sophisticated solutions are emerging for both external and internal clouds that will push us to redesign our datacenters and see them as self-service infrastructure. More on this very shortly.

Virtualization Basics

Virtualization is not a new concept, but its complexity has been growing, and a number of new paradigms are rising. I will try to demystify some of the concepts behind virtualization, briefly explain some of its basics, and finally look at some of the products and solutions out there.

To begin, let me introduce three very simple concepts regarding virtualization: the host operating system, the hypervisor, and the guest operating system.

Virtualization Components

The host operating system provides a host to one or more virtual machines (or partitions) and shares physical resources with them. It’s where the virtualization product or the partitioning product is installed.

The guest operating system is the operating system installed inside a virtual machine (or a partition). In a virtualization solution the guest OS can be completely different from the host OS. In a partitioning solution the guest OS must be identical to the host OS.

A hypervisor, also called a virtual machine manager (VMM), is a program that allows multiple operating systems to share a single hardware host. Each operating system appears to have the host’s processor, memory, and other resources all to itself. The task of this hypervisor is to handle resource and memory allocation for the virtual machines, ensuring they cannot disrupt each other, in addition to providing interfaces for higher level administration and monitoring tools.

The Hypervisor

There are two types of hypervisors as depicted below:

vBasics2

Note: Xen is open-source virtualization software used by several companies to implement their virtualization solutions, companies like ORACLE, Citrix, Sun, and Virtual Iron, to name a few.

Type 1 hypervisors, also known as bare-metal hypervisors, are software systems that run directly on the host's hardware as a hardware control and guest operating system monitor. Bare-metal virtualization is the current enterprise data center leader; VMware ESX is easily the market leader in enterprise virtualization at the moment, and it uses a bare-metal architecture. What is immediately apparent about this architecture is the lack of an existing OS; the hypervisor sits directly on top of the hardware, hence the term "bare-metal virtualization". The reason so many data centers implement bare-metal products, such as ESX, Xen, and Hyper-V, is the speed they provide due to the decreased overhead compared with hosted virtualization.

vBasics3

Type 2 hypervisors, also known as hosted hypervisors, are software applications running within a conventional operating system environment. This type of hypervisor is typically used in client-side virtualization solutions such as Microsoft's Virtual PC and VMWare's Workstation.

vBasics4

The Protection Rings

Another important concept is the protection rings. x86 CPUs provide a range of protection levels, also known as rings, in which code can execute. Ring 0 has the highest level privilege and is where the operating system kernel normally runs. Code executing in Ring 0 is said to be running in system space, kernel mode or supervisor mode. All other code, such as applications running on the operating system, operate in less privileged rings, typically Ring 3.

vBasics5

The hypervisor runs directly on the hardware of the host system in ring 0. Clearly, with the hypervisor occupying ring 0 of the CPU, the kernels for any guest operating systems running on the system must run in less privileged CPU rings. Unfortunately, most operating system kernels are written explicitly to run in ring 0, for the simple reason that they need to perform tasks that are only available in that ring, such as the ability to execute privileged CPU instructions and directly manipulate memory.

The AMD-V and Intel-VT CPUs introduce a new privilege level, called Ring -1, in which the VMM resides, allowing for better performance since the VMM no longer needs to fool the guest OS into thinking it is running in Ring 0. Solutions like VMWare ESX, Xen (Citrix, ORACLE, IBM, etc.), and Microsoft Hyper-V take advantage of the hardware virtualization capabilities inherent in the new Intel and AMD CPUs.

Virtualization Landscape

After this brief introduction, let's now take a look at the global virtualization landscape out there. The following diagram shows how virtualization architectures are organized, as well as some of the solutions that implement them.

vBasics6

The following sections will briefly introduce some of the most important types of virtualization.

Traditional

This is not a virtualization scenario; it's here solely for comparison purposes. Here we see that the OS sits directly above the hardware, executing in ring 0.

vBasics7

Paravirtualization

Under paravirtualization, the kernel of the guest operating system is modified specifically to run on the hypervisor. This typically involves replacing any privileged operations that will only run in ring 0 of the CPU with calls to the hypervisor (known as hypercalls). The hypervisor in turn performs the task on behalf of the guest kernel.

This typically limits support to open source operating systems, such as Linux, which may be freely altered, and to proprietary operating systems whose owners have agreed to make the necessary code modifications to target a specific hypervisor. The result is that the guest kernel can communicate directly with the hypervisor, yielding greater performance than other virtualization approaches.

vBasics8

Full Virtualization without Hardware Assist

Full virtualization provides support for unmodified guest operating systems. The term unmodified refers to operating system kernels which have not been altered to run on a hypervisor and, therefore, still execute privileged operations as though running in ring 0 of the CPU.

In this scenario, the hypervisor provides CPU emulation to handle and modify privileged and protected CPU operations made by unmodified guest operating system kernels. Unfortunately, this emulation process requires both time and system resources to operate, resulting in inferior performance levels when compared to those provided by paravirtualization.

vBasics9

Full Virtualization with Hardware Assist

Hardware virtualization leverages virtualization features built into the latest generations of CPUs from both Intel and AMD. These technologies, known as Intel VT and AMD-V, respectively, provide extensions necessary to run unmodified guest virtual machines without the overheads inherent in full virtualization CPU emulation.

In very simplistic terms, these new processors provide an additional privilege mode below ring 0 in which the hypervisor can operate, essentially leaving ring 0 available for unmodified guest operating systems.

vBasics10

OS virtualization

Compared with hypervisor-based virtualization, container-based virtualization offers a completely different approach. Instead of virtualizing a system with a complete operating system installation, container-based virtualization runs isolated containers within a single OS. In cases where only one operating system is needed, the main benefits of container-based virtualization are that it doesn't duplicate functionality and it improves performance.

OS virtualization has been making waves lately because Microsoft is rumored to be in the market for an OS virtualization technology. The most well-known products that use OS virtualization are Parallels Virtuozzo and Solaris Containers. This virtualization architecture has many benefits, speedy performance being the foremost. Another benefit is reduced disk space requirements. Many containers can use the same files, resulting in lowered disk space requirements.

The big caveat with OS virtualization is the OS requirement: container OSs must be the same as the host OS. This means that if you are using Solaris containers, then all containers must run Solaris. If you are implementing Virtuozzo containers on Windows 2003 Standard Edition, then all of its containers must also run Windows 2003 Standard Edition.

vBasics11

Hosted virtualization

This is the type of virtualization with which most users are familiar. All of the desktop virtualization products, such as VMware Workstation, VMware Fusion, Parallels Desktop for the Mac, and Microsoft Virtual PC, implement a hosted virtualization architecture. There are many benefits to this type of virtualization. Users can install a virtualization product onto their desktop just like any other application and continue to use their desktop OS. Hosted virtualization products also take advantage of the host OS's device drivers, so the virtualization product supports whatever hardware the host does.

vBasics12

Conclusion

As concepts evolve, it is often difficult to get a clear definition of the basics behind them, and virtualization is no exception to this rule. When I first started looking into virtualization a little more deeply (driven by my Cloud Computing crusade), I found it difficult to find clear information on all its fronts. I hope this blog entry helps those of you with the same problem. Furthermore, with the rise of Cloud Computing, new paradigms are emerging, forcing virtualization solutions to adapt to a new reality; a subject I will address shortly.

Windows Azure – Service Bus Publish/Subscribe Example

Within the Azure Platform there is a set of services named .NET Services. This set of services was originally known as BizTalk.NET, and it includes the Workflow Services, the Access Control Services, and the one we will talk about here, the Service Bus.

servicebus

The Service Bus implements the familiar Enterprise Service Bus pattern. In a nutshell, the Service Bus provides location transparency between a service and its consumer, along with a set of other rather important capabilities. It allows you to build composite applications based on services whose location you really do not need to know. They could be on servers inside your company or on a server on the other side of the world; the location is irrelevant. There are, nevertheless, important things you need to know about the service you are calling, namely security. The Access Control Service integrates seamlessly with the Service Bus to provide authentication and authorization. The Access Control Service will be addressed in some other entry; for now we are concentrating on the Service Bus.

The following diagrams depict different scenarios where it makes sense to use the Service Bus.

scenario1

scenario2

Depending on the Service Bus location, it can take a slightly different designation. If the Service Bus is installed and working on-premises, it is commonly known as an ESB (Enterprise Service Bus); if it is in the cloud, it takes the designation ISB (Internet Service Bus). It is still not clear what Microsoft's intentions are regarding an on-premises offering of the Azure Platform. The following diagram shows another possible scenario for using the Service Bus.

Scenario 3

As I mentioned before, there are several other benefits associated with the use of the Service Bus that can be leveraged by the configuration shown in this diagram. For instance, the Service Bus also provides protocol mediation, allowing the use of non-standard bindings inside the enterprise (e.g., NetTcpBinding) and more standard protocols once a request is forwarded to the cloud (e.g., BasicHttpBinding).
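As an illustration only, one way to realize this is to expose the same contract over an internal TCP endpoint and a relayed HTTP endpoint. The sketch below assumes the EchoService/IEchoContract types used in the example that follows; the addresses and solution name are hypothetical, and the Service Bus credentials (a TransportClientEndpointBehavior) are omitted for brevity.

using System.ServiceModel;
using Microsoft.ServiceBus;

// inside Main (sketch only)
string solutionName = "mySolution"; // hypothetical solution name

ServiceHost host = new ServiceHost(typeof(EchoService));

// non-standard, fast binding for consumers inside the enterprise
host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpBinding(),
    "net.tcp://localhost:9000/EchoService");

// standard HTTP binding relayed through the Service Bus for consumers outside
host.AddServiceEndpoint(typeof(IEchoContract), new BasicHttpRelayBinding(),
    ServiceBusEnvironment.CreateServiceUri("http", solutionName, "EchoService"));

host.Open();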

Going back to our example, we are going to set up the publisher/subscriber scenario depicted in the following diagram.

Cloud

Let's start by building the service. To do so, follow these steps:

1) Sign in to the Azure Services Platform Portal at http://portal.ex.azure.microsoft.com/

2) Create a solution in the Azure Services Platform Portal. This will create an account issued by the Access Control Service (accesscontrol.windows.net). The Access Control Service creates this account for convenience only, and it is going to be deprecated. The Access Control Service is basically an STS (Security Token Service); there is no intention from Microsoft to build yet another identity management system. It does, however, integrate with identity management systems such as Windows CardSpace, Windows Live ID, Active Directory Federation Services, etc.

3) Create a console application named “ESBServiceConsole”

4) Add a reference to the “System.ServiceModel” assembly

5) Add a reference to the “Microsoft.ServiceBus” assembly. You can find this assembly in the folder “C:\Program Files\Microsoft .NET Services SDK (March 2009 CTP)\Assemblies\Microsoft.ServiceBus.dll”. By the way, I am using the March 2009 CTP in this example; you can find it at http://www.microsoft.com/downloads/details.aspx?FamilyID=b44c10e8-425c-417f-af10-3d2839a5a362&displaylang=en

6) Add the following interface to the “program.cs” file

 

[ServiceContract(Name = "IEchoContract", Namespace = "http://azure.samples/")]
public interface IEchoContract
{
    [OperationContract(IsOneWay = true)]
    void Echo(string text);
}

 

7) Add the following class to the program “program.cs” file

 

[ServiceBehavior(Name = "EchoService", Namespace = "http://azure.samples/")]
class EchoService : IEchoContract
{
    public void Echo(string text)
    {
        Console.WriteLine("Echoing: {0}", text);
    }
}

 

8) Add the following code to the “main” function

 

// since we are using a netEventRelayBinding based endpoint we can set the connectivity protocol; in this case we are setting it to HTTP
ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;

// read the solution credentials to connect to the Service Bus. This type of credential is going to be deprecated and exists only for convenience; in a real scenario one should use CardSpace, certificates, Live Services ID, etc.
Console.Write("Your Solution Name: ");
string solutionName = Console.ReadLine();
Console.Write("Your Solution Password: ");
string solutionPassword = Console.ReadLine();

// create the endpoint address in the solution's namespace
Uri address = ServiceBusEnvironment.CreateServiceUri("sb", solutionName, "EchoService");

// create the credentials object for the endpoint
TransportClientEndpointBehavior userNamePasswordServiceBusCredential = new TransportClientEndpointBehavior();
userNamePasswordServiceBusCredential.CredentialType = TransportClientCredentialType.UserNamePassword;
userNamePasswordServiceBusCredential.Credentials.UserName.UserName = solutionName;
userNamePasswordServiceBusCredential.Credentials.UserName.Password = solutionPassword;

// create the service host, reading the endpoints from configuration
ServiceHost host = new ServiceHost(typeof(EchoService), address);

// add the Service Bus credentials to all endpoints specified in configuration
foreach (ServiceEndpoint endpoint in host.Description.Endpoints)
{
    endpoint.Behaviors.Add(userNamePasswordServiceBusCredential);
}

// open the service
host.Open();

Console.WriteLine("Service address: " + address);
Console.WriteLine("Press [Enter] to exit");
Console.ReadLine();

// close the service
host.Close();

 

Notice that I chose the HTTP protocol as the connectivity mode on the service side; on the client side, I will specify the TCP protocol. This is to show that protocol mediation can be accomplished with the use of the Service Bus.

9) Add an “app.config” file to the project

10) Add the following configuration to the “app.config” file

 

<system.serviceModel>
  <services>
    <service name="ESBServiceConsole.EchoService">
      <endpoint contract="ESBServiceConsole.IEchoContract"
                binding="netEventRelayBinding" />
    </service>
  </services>
</system.serviceModel>

 

11) Compile and run the service. Enter the solution credentials, and you should get the following:

servicerun

Now let´s build a client application.

1) Add a console project named “ESBClientConsole” to the solution.

2) Add a reference to the “System.ServiceModel” assembly.

3) Add a reference to the “Microsoft.ServiceBus” assembly.

4) Add the following interface to the “program.cs” file

 

[ServiceContract(Name = "IEchoContract", Namespace = "http://azure.samples/")]
public interface IEchoContract
{
    [OperationContract(IsOneWay = true)]
    void Echo(string text);
}

public interface IEchoChannel : IEchoContract, IClientChannel { }

 

5) Add the following code to the “main” function

 

// since we are using a netEventRelayBinding based endpoint we can set the connectivity protocol; in this case we are setting it to TCP
ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Tcp;

// read the solution credentials to connect to the Service Bus. This type of credential is going to be deprecated and exists only for convenience; in a real scenario one should use CardSpace, certificates, Live Services ID, etc.
Console.Write("Your Solution Name: ");
string solutionName = Console.ReadLine();
Console.Write("Your Solution Password: ");
string solutionPassword = Console.ReadLine();

// create the service URI based on the solution name
Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", solutionName, "EchoService");

// create the credentials object for the endpoint
TransportClientEndpointBehavior userNamePasswordServiceBusCredential = new TransportClientEndpointBehavior();
userNamePasswordServiceBusCredential.CredentialType = TransportClientCredentialType.UserNamePassword;
userNamePasswordServiceBusCredential.Credentials.UserName.UserName = solutionName;
userNamePasswordServiceBusCredential.Credentials.UserName.Password = solutionPassword;

// create the channel factory loading the configuration
ChannelFactory<IEchoChannel> channelFactory = new ChannelFactory<IEchoChannel>("RelayEndpoint", new EndpointAddress(serviceUri));

// apply the Service Bus credentials
channelFactory.Endpoint.Behaviors.Add(userNamePasswordServiceBusCredential);

// create and open the client channel
IEchoChannel channel = channelFactory.CreateChannel();
channel.Open();

Console.WriteLine("Enter text to echo (or [Enter] to exit):");
string input = Console.ReadLine();
while (input != String.Empty)
{
    try
    {
        channel.Echo(input);
        Console.WriteLine("Done!");
    }
    catch (Exception e)
    {
        Console.WriteLine("Error: " + e.Message);
    }
    input = Console.ReadLine();
}

channel.Close();
channelFactory.Close();

 

6) Add an “app.config” file to the project

7) Add the following configuration to the “app.config” file

 

<system.serviceModel>
  <client>
    <endpoint name="RelayEndpoint"
              contract="ESBClientConsole.IEchoContract"
              binding="netEventRelayBinding" />
  </client>
</system.serviceModel>

 

8) Compile the client, run three instances of the service, enter the credentials, then run the client and type some text; the result should be as follows.

servicerun2

There you have it, a publish/subscribe example using the Service Bus.

Enterprise Architecture (EA)

The concept of Enterprise Architecture has been around for quite some time; still, not many organizations have embraced it or identified the human resources (Enterprise Architects) needed to implement it.

In fact, many organizations have one or more isolated architecture initiatives, typically a Technology Architecture that addresses the infrastructure (networking, servers, operating systems, DBMS, messaging, etc.) and an Application Architecture that addresses development (web, services, databases, etc.). Without an Enterprise Architecture to glue all these (and more, as we will see) architectures together, there is a risk of overlapping efforts, poor coordination between teams, misalignment between IT and business needs, and ultimately a less effective organization.

So, what exactly is an Enterprise Architecture? An Enterprise Architecture is, basically, a "blueprint" that systematically and completely defines an organization's current (baseline) and desired (target) environment. This "blueprint" allows an organization to achieve its goals through optimal performance of its core business processes within an efficient information technology environment. Enterprise Architectures are essential for evolving information systems and developing new systems that optimize the organization's mission value. This is expressed in logical or business terms (e.g., mission, business functions, information flows, and systems environments) and technical terms (e.g., software, hardware, communications), and includes a Sequencing Plan for transitioning from the baseline environment to the target environment.

EA

Now that we've seen that one of EA's main objectives is to align IT investments with business needs, we are going to dive a little deeper into the concept.

EA Details

  • Organization Architecture – Defines the decision chain, human resources, cultural characteristics, organization topology, competencies, responsibilities, ownerships, etc.
  • Business Architecture – Describes the organization's "modus operandi": the business processes and the human resources involved.
  • Information Architecture – Manages the information needed to support the business in an abstract manner, independently of the technology and of particular process or application implementations.
  • Application Architecture – Models the applications that use the information to support the business needs in a concrete manner.
  • Technology Architecture – Manages the infrastructure that supports the communications, applications, and information needed to run the business.
  • Security Architecture – Establishes and controls the security policies at every layer, according to the defense-in-depth and secure-by-design principles.
  • Governance – Monitors and ensures the overall health of the Enterprise Architecture.

One of the Enterprise Architecture's goals is to identify a set of principles, technologies, and products that can or should be used in each module. The following diagram presents an example of technologies and principles that should be used in a particular organization.

 EA Details

Enterprise Architecture is an ever-evolving concept. Initially it was basically an IT concept, but it now contains business and organizational concepts at its core. In most organizations business drives IT, so this was to be expected. The following diagram shows the original concept, IT Architecture, and how that concept now fits into this new model.

EA

In conclusion, an Enterprise Architecture mainly helps you to:

  1. Guarantee the alignment between business needs and IT initiatives. Most organizations are driven by business plans, not IT, so it is extremely important (especially nowadays) that IT investments translate directly into business gains.
  2. Identify, from the myriad of existing technologies, standards, products, and principles, which ones make sense to use in the organization, how to use them, and where to use them.
  3. Define a "blueprint" of the organization's current environment (baseline).
  4. Establish a set of best practices, guidelines, and roadmaps for the evolution of the overall organization (target).
  5. Avoid ad-hoc implementations that diverge from the necessary global homogeneity.
  6. Define the human resources organization and the role each person plays, establishing competencies, responsibilities, and ownerships.

There are a few EA methodologies available, among which the Zachman Framework for Enterprise Architectures, The Open Group Architecture Framework (TOGAF), the Federal Enterprise Architecture, and the Gartner Methodology are some of the most popular. What I presented here is a very succinct example of an instantiation of an Enterprise Architecture, one that in my experience makes sense to define. Different organizations may require different approaches. It is, however, in all cases mandatory to identify a set of experienced architects, known as Enterprise Architects, to define and maintain the Enterprise Architecture. They also have a supervisory role, ensuring that every implementation in the organization is in agreement with the defined EA.

The objective of this article is to give you an overview of the concept of EA and emphasize its importance. It basically helps everyone in an organization focus on one common objective, giving everyone a role in the big scheme of things and providing guidelines for evolution. It is not intended to be an exhaustive description of the subject; there are innumerable articles and documents that will allow you to delve deeper into this important concept.

The Azure Platform and Usage Scenarios

We have all seen the picture below showing the main blocks of the Azure Platform. But how, exactly, do all of these services work together to provide us with a cloud solution? As I usually say, Microsoft is not extremely good at inventing new things, but it is extremely good at turning a particular idea into a great product that really makes sense to use. With Azure, Microsoft took all its great products and frameworks and allowed us to use them in a cloud environment in a way very similar to the one we use in our on-premises environment. Actually, that's not quite true; they made it even easier.

Azure Services Platform

Usually what we get from a cloud computing solution is an abstraction from the underlying hardware and software our app is going to run on. Some providers even offer other services such as database management systems, but few go as far as providing content management services, CRM services, mesh services, access control services, SQL reporting and analysis services, a service bus, workflow services, etc.

The picture above shows the three main blocks of the Azure Platform: the operating system, the services, and some client portals. Additionally, we get a development environment that allows us to use our favourite programming language. Of course, you will not get an ORACLE database or an Apache web server, but for the most part you don't even need to know there is a SQL Server or an IIS under the covers. Zooming in, we get to see the functionalities addressed within each of these main blocks.

Azure Services Platform Details

Within Windows Azure, the operating system for the cloud, we have two main services: the management services, also known as the Fabric Controller, which take care of all the virtualization, deployment, scaling, logging, tracing, failure recovery, etc., and the storage system, which provides us with a simple way to keep our data in blobs, tables (not SQL tables), and queues. The technologies we can use to reach all of these functionalities are varied: REST, JSON, HTTP, SOAP, etc. To host our apps and services, we have IIS 7 and the .NET Framework 3.5, which allows us to expose our services any way we want, from the less standard REST to the more standard WS-*. Whoever is familiar with WCF will naturally take advantage of a new set of bindings that allow your services to be exposed through the new Service Bus in a direct or publisher/subscriber fashion.

The services layer provides Live Services, from mesh services that allow you to share and synchronize folders and files, to Identity and Directory Services to manage access to resources and applications. The .NET Services consist of the Access Control Service, the Service Bus, and the Workflow Service. The Access Control Service, built using the "Geneva" Framework, is basically an STS (Security Token Service) that provides a claims-based identity model, along with federation capabilities through WS-* standards, giving authentication and authorization services to anyone trying to access the services layer. The Service Bus, formerly BizTalk Services (I'm glad they changed the name), basically provides publish/subscribe functionality for calling services, as well as location transparency between the service and the service consumer. The Workflow Service provides service orchestration and integrates with the Service Bus and the Access Control Service to deliver more complex functionality. It also provides all the functionality you can find in Workflow Foundation, like support for long-running workflows, a workflow designer, etc. The SQL Services provide typical data, reporting, and analysis services.

In broader terms, the Azure Platform provides the services shown below:

Azure Platform

We are now going to take a quick look at some usage scenarios: how exactly all of these services and functionalities can work together to compose complex applications, processes, and services. Some of these I have tried myself, and I will shortly be posting practical examples.

Example

This is a simple use of the storage environment for applications that merely require a way to keep their data in a persistent store. Microsoft provides Tables, Queues, and Blobs, and is working on new ways to store your data, namely File Streams, Locks, and Caches. Tables allow you to store data in a way similar to a DBMS, but in fact there is no SQL Server involved. Queues allow you to temporarily store data for processing and are a good way to relay data from one service to another, as we will see. Blobs are more oriented towards storing unstructured data such as different file formats.
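As a small illustration, here is how a piece of unstructured data might be written to Blob storage using the StorageClient library that ships with more recent Windows Azure SDKs (the March 2009 CTP used a slightly different sample client); the account credentials, container, and blob names are hypothetical.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// hypothetical storage account credentials
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

// store some unstructured data as a blob in a container
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("documents");
container.CreateIfNotExist();
CloudBlob blob = container.GetBlobReference("hello.txt");
blob.UploadText("Hello, Blob storage!");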

Web Role Example

In this scenario, we want to deploy to the cloud an ASP.NET web application that possibly uses some storage to keep some of its data. For this, we use the Hosted Services capability, namely a Web Role to host the web app.

Web and Worker Role Example

In this example, we are extending the previous example to use a Worker Role to do some background asynchronous processing. A way to relay the data to be processed is to send it through a queue: the web application posts the data to be processed to the queue, and the Worker Role periodically checks for data to process (see the sketch below). Web Roles are basically web applications or services hosted in IIS. Worker Roles are NT services with a specific interface, similar to the SCM (Service Control Manager), that are constantly running and looking for something to do.
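A minimal sketch of this hand-off, reusing the hypothetical CloudStorageAccount from the previous sketch and again assuming the later StorageClient library rather than the CTP-era sample client; the queue name and message content are illustrative.

using Microsoft.WindowsAzure.StorageClient;

// Web Role side: post the data to be processed to a queue
CloudQueueClient queueClient = account.CreateCloudQueueClient();
CloudQueue queue = queueClient.GetQueueReference("workitems");
queue.CreateIfNotExist();
queue.AddMessage(new CloudQueueMessage("process-order-42"));

// Worker Role side (inside its run loop): poll the queue and process messages
while (true)
{
    CloudQueueMessage message = queue.GetMessage();
    if (message != null)
    {
        Console.WriteLine("Processing: " + message.AsString);
        queue.DeleteMessage(message); // remove the message once the work is done
    }
    else
    {
        System.Threading.Thread.Sleep(1000); // nothing to do yet
    }
}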

ESB Services Example

As explained in the previous scenario, Worker Roles are constantly running, looking for or waiting for something to do. Worker Roles can access the storage or call external services through the Service Bus, to collect data, send data, or simply notify an external service of some event.

Worker role Example

In fact, Worker Roles can call any service within the cloud.

Worker Role Example

Your applications, services, or any other processes can call any of the services provided by Azure directly to enrich their functionality. The .NET Services, in turn, can call other services and/or interact with the storage system. Through the storage system we can trigger worker processes to do asynchronous work for us.

ESB Services Example

The Service Bus is a powerful and useful service that basically acts as a mediator between consumers and services. This mediation can be accomplished in two ways: one that allows direct calls from a consumer to a service, and another in a publish/subscribe fashion. In both cases there is no knowledge of the location of the service; the consumer addresses the Service Bus unaware of the service location. This addressing is accomplished through a URI of the form sb://servicebus.windows.net/helloservice that both the service and the consumer use to register themselves with the Service Bus. The service must be registered and active on the Service Bus in order for the call from the consumer to reach it. WCF provides a new set of bindings that allow you to address the Service Bus: BasicHttpRelayBinding, WSHttpRelayBinding, NetTcpRelayBinding, etc.
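For the direct-call style, a consumer might reach a relayed service as sketched below, assuming the service registered the same contract at that URI with a NetTcpRelayBinding. The sketch reuses the IEchoChannel interface from the Service Bus example earlier; the solution name is hypothetical and the Service Bus credentials (a TransportClientEndpointBehavior) are omitted.

using System.ServiceModel;
using Microsoft.ServiceBus;

string solutionName = "mySolution"; // hypothetical
Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", solutionName, "helloservice");

// direct (relayed) call: the consumer addresses the Service Bus, not the service itself
ChannelFactory<IEchoChannel> factory =
    new ChannelFactory<IEchoChannel>(new NetTcpRelayBinding(), new EndpointAddress(serviceUri));
// Service Bus credentials would be added here via a TransportClientEndpointBehavior

IEchoChannel channel = factory.CreateChannel();
channel.Open();
channel.Echo("Hello through the Service Bus");
channel.Close();
factory.Close();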

ESB Services Example

As mentioned, the Service Bus allows a publish/subscribe mechanism for service invocation. This allows one call from a consumer to reach several services that expose the same interface. To work with this configuration, the service contract operations should not return values. WCF provides a particular binding for this configuration, NetEventRelayBinding.

ESB and Workflow Services Example

WCF also provides context bindings, WSHttpRelayContextBinding and NetTcpRelayContextBinding, to be used for WCF-WF integration. These bindings allow WF Receive activities (web services exposed directly from WF workflows) to receive contextual calls, i.e., calls carrying an extra SOAP header (instanceId) with a reference to the persisted workflow. Those of you familiar with WF-WCF integration will easily understand the importance of these two bindings.

ESB Services Example

As we have seen, the Service Bus can call any other service available in the cloud or outside it. Worker Roles can also implement services and register them on the Service Bus, thus allowing external apps to call them.

Access Control Service Example

Every call to the services is validated against the Access Control Service. The Access Control Service is actually an STS (Security Token Service) that intercepts all calls, authenticating the caller of the service and returning a number of claims the service can use to authorize the call. Now, this is a complex topic on its own, which I will write about in a later entry on this blog. For now, I just wanted to give an idea of how this service is used.
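Just to give a flavour of what the service ends up seeing, here is a sketch of inspecting the issued claims with the Geneva Framework (Microsoft.IdentityModel), on which the Access Control Service builds; the hosting and token-handling setup is omitted, and the code merely dumps whatever claims arrive.

using System;
using System.Threading;
using Microsoft.IdentityModel.Claims;

// inside a service operation: inspect the claims issued by the Access Control Service
IClaimsPrincipal principal = Thread.CurrentPrincipal as IClaimsPrincipal;
if (principal != null)
{
    foreach (IClaimsIdentity identity in principal.Identities)
    {
        foreach (Claim claim in identity.Claims)
        {
            Console.WriteLine("{0} = {1}", claim.ClaimType, claim.Value);
        }
    }
}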

Access Control Service Example

The same applies when accessing the web apps in the Web Role hosted services or the storage.

There are a number of possibilities; just use your imagination (and some best practices), and you can basically mix and match these services according to your needs. You can build complex processes, applications, and services using these building blocks without having to worry about setting up the infrastructure that supports them. The benefits are obvious, and I believe that in the long term this is where IT is heading.

Hosting a WCF Service in Windows Azure

When I first tried to create and deploy a WCF web service into the cloud, I faced several constraints, some derived from my inexperience with the Azure Platform, some due to the fact that this is still a fairly recent technology from Microsoft, a CTP after all. In the next few paragraphs I will walk you through the steps to create and deploy a WCF service exposed with the WsHttpBinding.

There are a few prerequisites that need to be met in order to proceed with Azure development. To setup the proper development environment one needs to have:

Windows Vista or Windows 2008
Visual Studio 2008 + SP1
Windows Azure SDK and Windows Azure Tools for Microsoft Visual Studio
(http://www.microsoft.com/azure/sdk.mspx)
Access to Azure Services Developer Portal
(http://www.microsoft.com/azure/register.mspx)

Now, let's start by creating a new project in Visual Studio of type "Web Cloud Service"

createwebcloudservice1

Leave the configuration and definition files as they were created by Visual Studio. Note that the CTP access permits only one instance, i.e., only one virtual machine; do not change this setting, you can play with it only on the local Development Fabric.

Even though all we want is to create and deploy a WCF service, keep the "default.aspx" page merely as a faster way to verify that the package was properly deployed. For that, just add a label to the page with some text.

deafultaspx

Now add a WCF Service to the project as follows

addservice

Alter the service contract to something a little more demo-friendly, like:

[ServiceContract]
public interface IService
{
    [OperationContract]
    string Echo(string msg);
}

public class Service : IService
{
    public string Echo(string msg)
    {
        return "Echo: " + msg;
    }
}

Also alter the configuration file (web.config), specifying the security policy for your binding:

<bindings>
  <wsHttpBinding>
    <binding name="wsConfig">
      <security mode="None" />
    </binding>
  </wsHttpBinding>
</bindings>
<services>
  <service behaviorConfiguration="MyCloudService_WebRole.ServiceBehavior" name="MyCloudService_WebRole.Service">
    <endpoint address="" binding="wsHttpBinding" contract="MyCloudService_WebRole.IService" bindingConfiguration="wsConfig">
    </endpoint>
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
  </service>
</services>

Test your web application and service locally by right-clicking the default.aspx page and selecting "View in Browser".

webpagetest

You used the ASP.NET Development Server for this test. If you use the local Azure Development Fabric, you will get the following error when you test your service. This appears to be a bug, because you do not get the same error once you deploy to the real cloud.

serviceerror

Speaking of deployment, right-click the MyCloudService project and select "Publish". Once you select the "Publish" option, you should see a browser open on your Azure project as shown below, as well as an explorer window with the configuration and definition files. Press the "Deploy…" button and follow the instructions.

deploy1

Press the "Run" button to test your web app and service; this will take several minutes while your VM is starting.

run

To test your app, simply follow the temporary DNS name provided and you should get something similar to

webpagetest1

Now, change the URL to address the web service and you should get

cloudservicetest

Notice that the URL for the WSDL provided by Azure is an internal URL which is not resolved; this has been reported as a bug and will be fixed. To view your WSDL, simply change the URL in the browser to http://8f513536-a984-47e5-ac32-283f32b2d51d.cloudapp.net/service.svc?wsdl

wsdltest

Now, promote your project to the production environment

promote

promoted

This should be quite fast since it is only changing the DNS name with which your app is exposed.

Our test would not be complete without building a client that actually calls the service, so let's do it. Since the WSDL provided in the cloud has references to URLs that are not resolved from the client, the best way to build the client is to run the service locally with the ASP.NET Development Server. For that, simply double-click the "ASP.NET Development Server"

aspnetdevserver

And browse to the WSDL

servicetest1

Then add a console application to the solution as follows

clientproject

Reference the Web Service to create the proxy

servicereference

And add the following code to the main function

static void Main(string[] args)
{
    ServiceReference1.ServiceClient proxy = new ServiceReference1.ServiceClient();
    Console.WriteLine(proxy.Echo("Hello Cloud World!"));
    proxy.Close();
    Console.ReadLine();
}

First, test it locally, then change the address in the configuration file to the one in the cloud:

<client>
  <endpoint address="http://icloud.cloudapp.net/Service.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IService" contract="ServiceReference1.IService" name="WSHttpBinding_IService">
  </endpoint>
</client>

Compile it and run it against the cloud. You should get an exception as follows:

“The message with To 'http://icloud.cloudapp.net/service.svc' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree.”

This is due to a verification made by the default EndpointAddressMessageFilter, which detects a mismatch between the two addresses. The cause may be related to the virtualization of the service address, probably tied to the internally assigned address. The following code was retrieved with Reflector and shows the logic behind the Match function.

public override bool Match(Message message)
{
    if (message == null)
    {
        throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgumentNull("message");
    }
    Uri to = message.Headers.To;
    Uri uri = this.address.Uri;
    return (((to != null) && this.comparer.Equals(uri, to)) && this.helper.Match(message));
}

Fortunately, there is a behavior to resolve this problem; add it to the service as shown below, then recompile and redeploy the service to the cloud.

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class Service : IService
{
    public string Echo(string msg)
    {
        return "Echo: " + msg;
    }
}

Run the client console application again, and this time you should get a response back from your cloud service.

client1