
Enterprise Architecture (EA)

The concept of Enterprise Architecture has been around for quite some time; still, not many organizations have embraced it or identified the human resources (Enterprise Architects) needed to implement it.

In fact, many organizations have one or more isolated architecture initiatives: typically a Technology Architecture that addresses the infrastructure (networking, servers, operating systems, DBMS, messaging, etc.) and an Application Architecture that addresses development (web, services, database, etc.). Without an Enterprise Architecture to glue all these architectures (and more, as we will see) together, there is a risk of overlapping efforts, poor coordination between teams, misalignment between IT and the business needs, and ultimately a less effective organization.

So, what exactly is an Enterprise Architecture? An Enterprise Architecture is, basically, a “blueprint” that systematically and completely defines an organization’s current (baseline) and desired (target) environment. This “blueprint” allows an organization to achieve its goals through optimal performance of its core business processes within an efficient information technology environment. Enterprise architectures are essential for evolving existing information systems and developing new systems that optimize the organization’s mission value. This is accomplished in logical or business terms (e.g., mission, business functions, information flows, and systems environments) and technical terms (e.g., software, hardware, communications), and includes a Sequencing Plan for transitioning from the baseline environment to the target environment.

EA

Now that we’ve seen that one of EA’s main objectives is to align IT investments with the business needs, we are going to dive a little deeper into the concept.

EA Details

  • Organization Architecture – Defines the decision chain, human resources, cultural characteristics, organization topology, competencies, responsibilities, ownerships, etc.
  • Business Architecture – Describes the organization’s “modus operandi”: the business processes and the human resources involved in them.
  • Information Architecture – Manages the information needed to support the business in an abstract manner, independently of the technology and of particular process or application implementations.
  • Application Architecture – Models the applications that use the information to support the business needs in a concrete manner.
  • Technology Architecture – Manages the infrastructure that supports the communications, applications, and information needed to run the business.
  • Security Architecture – Establishes and controls the security policies at every layer according to the defense-in-depth and secure-by-design principles.
  • Governance – Monitors and ensures the overall health of the Enterprise Architecture.

One of the Enterprise Architecture’s goals is to identify a set of principles, technologies, and products that can or should be used in each of these modules. The following diagram presents an example of technologies and principles that should be used in a particular organization.

 EA Details

Enterprise Architecture is an ever-evolving concept. Initially it was basically an IT concept, but it now contains business and organizational concepts at its core. In most organizations business drives IT, so this evolution was to be expected. The following diagram shows the original concept, IT Architecture, and how that concept now fits into this new model.

EA

In conclusion, an Enterprise Architecture mainly helps you to:

  1. Guarantee the alignment between business needs and IT initiatives. Most organizations are driven by business plans, not by IT, so it is extremely important (especially nowadays) that IT investments translate directly into business gains.
  2. Identify, from a myriad of existing technologies, standards, products, and principles, which ones make sense to use in the organization, how to use them, and where to use them.
  3. Define a “blueprint” of the organization’s current environment (baseline).
  4. Establish a set of best practices, guidelines, and roadmaps for the evolution of the overall organization (target).
  5. Avoid ad-hoc implementations that diverge from the necessary global homogeneity.
  6. Define the human resources organization and the role each person plays in it, establishing their competencies, responsibilities, and ownerships.

There are a few EA methodologies available, among which the Zachman Framework for Enterprise Architectures, The Open Group Architecture Framework (TOGAF), the Federal Enterprise Architecture, and the Gartner Methodology are some of the most popular. What I presented here was a very succinct example of an instantiation of an Enterprise Architecture, one that in my experience makes sense to define. Different organizations may require different approaches. It is, however, in all cases mandatory to identify a set of experienced architects to define and maintain the Enterprise Architecture. These architects are known as Enterprise Architects. They also have a supervisory role, ensuring that every implementation in the organization is in agreement with the defined EA.

The objective of this article is to give you an overview of the concept of EA and emphasize its importance. EA basically helps everyone in an organization focus on one common objective, giving everyone a role in the big scheme of things and providing guidelines for evolution. This article is not intended to be an exhaustive description of the subject; there are numerous articles and documents on the subject that will allow you to delve deeper into this important concept.

The Azure Platform and Usage Scenarios

We have all seen the picture below showing the main blocks of the Azure Platform. But how, exactly, do all of these services work together to provide us with a cloud solution? As I usually say, Microsoft is not extremely good at inventing new things, but it is extremely good at turning a particular idea into a great product that really makes sense to use. With Azure, Microsoft took all its great products and frameworks and allowed us to use them in a cloud environment in a way very similar to the one we are used to in our on-premises environments. Actually, that’s not quite true; they made it even easier.

Azure Services Platform

Usually what we get from a cloud computing solution is an abstraction from the underlying hardware and software on which our app is going to run. Some providers even offer other services such as database management systems, but few go as far as providing content management services, CRM services, mesh services, access control services, SQL reporting and analysis services, a service bus, workflow services, etc.

The picture above shows the three main blocks of the Azure Platform: the operating system, the services, and some client portals. Additionally, we get a development environment that allows us to use our favourite programming language. Of course, you will not get an Oracle database or an Apache web server, but for the most part you don’t even need to know that it is SQL Server or IIS under the covers. Zooming in, we get to see the functionalities addressed within each of these main blocks.

Azure Services Platform Details

Within Windows Azure, the operating system for the cloud, we have two main services: the management service, also known as the Fabric Controller, which takes care of all the virtualization, deployment, scaling, logging, tracing, failure recovery, etc., and the storage system, which provides us with a simple way to keep our data in blobs, tables (not SQL tables), and queues. The technologies we can use to reach all of these functionalities are various: REST, JSON, HTTP, SOAP, etc. To host our apps and services, we have IIS 7 and the .NET Framework 3.5, which allows us to expose our services any way we want, from the less standard REST to the more standard WS-*. Whoever is familiar with WCF will naturally take advantage of a new set of bindings that allow your services to be exposed through the new Service Bus in a direct or publish/subscribe fashion.

The services layer provides Live Services, from mesh services that allow you to share and synchronize folders and files, to Identity and Directory Services that manage access to resources and applications. The .NET Services consist of an Access Control Service, a Service Bus, and a Workflow Service. The Access Control Service, built using the “Geneva” Framework, is basically an STS (Security Token Service) that provides a claims-based identity model, along with federation capabilities through WS-* standards, supplying authentication and authorization services to anyone trying to access the services layer. The Service Bus, formerly BizTalk Services (I’m glad they changed the name), basically provides publish/subscribe functionality for calling services, as well as location unawareness between the service and the service consumer. The Workflow Service provides service orchestration and integrates with the Service Bus and the Access Control Service to provide more complex functionality. It also provides the functionality you can find in Workflow Foundation, like support for long-running workflows, the workflow designer, etc. The SQL Services provide typical data, reporting, and analysis services.

In broader terms, the Azure Platform provides the services shown below:

Azure Platform

We are now going to take a quick look at some usage scenarios: how exactly can all of these services and functionalities work together to compose complex applications, processes, and services? Some of these I have tried myself, and I will shortly be posting some practical examples.

Example

This is a simple use of the Storage environment for applications that merely require a way to keep their data in a persistent store. Microsoft provides Tables, Queues, and Blobs, and is working on new ways to store your data, namely File Streams, Locks, and Caches. Tables allow you to store structured data in a way similar to a DBMS, but, in fact, there is no SQL Server involved. Queues allow you to temporarily store data for processing and are a good way to relay data from one service to another, as we will see. Blobs are more oriented towards storing unstructured data such as different file formats.
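
All three stores are exposed over plain HTTP/REST, so any language that can issue an HTTP request can use them. As a minimal sketch, assuming a hypothetical storage account named myaccount and a container named images that has been marked publicly readable (authenticated requests additionally need a SharedKey Authorization header, which I am omitting here), downloading a blob is just an HTTP GET:

using System;
using System.IO;
using System.Net;

class BlobDownloadSketch
{
    static void Main()
    {
        // Hypothetical account, container, and blob names, used for illustration only.
        // Each storage abstraction has its own REST endpoint:
        //   blobs  -> http://<account>.blob.core.windows.net/<container>/<blob>
        //   queues -> http://<account>.queue.core.windows.net/<queue>/messages
        //   tables -> http://<account>.table.core.windows.net/<table>()
        Uri blobUri = new Uri("http://myaccount.blob.core.windows.net/images/logo.png");

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(blobUri);
        request.Method = "GET"; // anonymous GET works here because the container is public

        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream stream = response.GetResponseStream())
        using (FileStream file = File.Create("logo.png"))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                file.Write(buffer, 0, read); // save the blob locally
            }
        }
    }
}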

Web Role Example

In this scenario, we want to deploy an ASP.NET web application to the cloud that possibly uses some storage to keep some of its data. For this, we use the Hosted Services capability, namely a Web Role to host the web app.

Web and Worker Role Example

In this example, we extend the previous one with a Worker Role that does some background asynchronous processing. A way to relay the data to be processed is to send it through a Queue: the web application posts the data to be processed to the queue, and the worker role periodically checks the queue for data to process. Web Roles are, basically, web applications or services hosted in IIS. Worker Roles are like NT services: they implement a specific interface, similar to the SCM (Service Control Manager) model, and are constantly running, looking for something to do; a minimal sketch of that polling loop is shown below.
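
The worker side of this pattern is just a loop that polls the queue, processes each message, and then deletes it; a dequeued message that is not deleted within its visibility timeout reappears, which gives you at-least-once processing. The sketch below shows only the pattern, using a hypothetical IWorkQueue wrapper around the queue storage API rather than the real SDK types.

using System;
using System.Threading;

// Hypothetical abstraction over the Queue storage API, used for illustration only.
public interface IWorkQueue
{
    bool TryDequeue(out string messageId, out string body); // message becomes invisible for a timeout
    void Delete(string messageId);                           // remove it once processing succeeded
}

public class WorkerLoop
{
    // In a real Worker Role this loop would live in the role's entry point,
    // which the Azure fabric starts and keeps running.
    public void Run(IWorkQueue queue)
    {
        while (true)
        {
            string messageId, body;
            if (queue.TryDequeue(out messageId, out body))
            {
                Process(body);           // do the background work posted by the Web Role
                queue.Delete(messageId); // only delete after successful processing
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(5)); // queue empty, back off before polling again
            }
        }
    }

    private void Process(string body)
    {
        Console.WriteLine("Processing: " + body);
    }
}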

ESB Services Example

As explained in the previous scenario, Worker Roles are constantly running, looking for or waiting for something to do. Worker Roles can access the Storage or call external services through the Service Bus to collect data, send data, or simply notify an external service of some event.

Worker role Example

In fact, Worker Roles can call any service within the cloud.

Worker Role Example

Your applications, services, or any other processes can call into any of the services provided by Azure directly to enrich their functionality. The .NET Services, on their own, can then call other services and/or interact with the Storage system. Through the Storage system we can trigger worker processes to do some asynchronous work for us.

ESB Services Example

The Service Bus is a powerful and useful service that basically acts as a mediator between consumers and services. This mediation can be accomplished in two ways: one that allows direct calls from a consumer to a service, and another in a publish/subscribe fashion. In both cases, there is no knowledge of the location of the service; the consumer addresses the Service Bus unaware of where the service actually lives. This addressing is accomplished through a URI of the form sb://servicebus.windows.net/helloservice that both the service and the consumer use to register themselves on the Service Bus. The service must be registered and active on the Service Bus in order for the call from the consumer to reach it. WCF provides a new set of bindings that allow you to address the Service Bus: BasicHttpRelayBinding, WSHttpRelayBinding, NetTcpRelayBinding, etc.
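
As a minimal sketch of the direct (relayed) case, the service side is ordinary WCF hosting with a relay binding and the sb:// address; the only extra step, which I am leaving out because it varies with the SDK version, is attaching your .NET Services solution credentials to the endpoint so the Service Bus accepts the registration. The contract name below is made up for the example, and the address is the one used above.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus; // relay bindings from the .NET Services SDK

[ServiceContract]
public interface IHelloService
{
    [OperationContract]
    string Hello(string name);
}

public class HelloService : IHelloService
{
    public string Hello(string name) { return "Hello " + name; }
}

public class RelayHostSketch
{
    public static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(HelloService));

        // Register the service on the Service Bus under the sb:// address;
        // the consumer uses the same binding and address to call it.
        host.AddServiceEndpoint(
            typeof(IHelloService),
            new NetTcpRelayBinding(),
            new Uri("sb://servicebus.windows.net/helloservice"));

        // NOTE: credential configuration (a TransportClientEndpointBehavior holding
        // your solution credentials) is omitted here and depends on the SDK release.

        host.Open(); // the listener is now registered and active on the Service Bus
        Console.WriteLine("Service registered on the Service Bus. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}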

ESB Services Example

As mentioned, the Service Bus also offers a publish/subscribe mechanism for service invocation. This allows one call from a consumer to reach several services that expose the same interface. To work in this configuration, the service contract operations should not return values. WCF provides a particular binding for this configuration, NetEventRelayBinding.
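
A sketch of such a contract, with one-way operations suitable for NetEventRelayBinding, might look like the following (the contract name and address are made up for the example); every listener registered on the same sb:// address with this binding receives the event.

using System.ServiceModel;
using Microsoft.ServiceBus; // NetEventRelayBinding lives here

// One-way contract: no return values, so a single publish can fan out
// to every subscriber registered on the same sb:// address.
[ServiceContract]
public interface IOrderEvents
{
    [OperationContract(IsOneWay = true)]
    void OrderCreated(string orderId);
}

public class EventPublisherSketch
{
    public static void Publish(string orderId)
    {
        ChannelFactory<IOrderEvents> factory = new ChannelFactory<IOrderEvents>(
            new NetEventRelayBinding(),
            "sb://servicebus.windows.net/orderevents"); // credentials omitted, as before

        IOrderEvents channel = factory.CreateChannel();
        channel.OrderCreated(orderId); // delivered to all active subscribers
        factory.Close();
    }
}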

ESB and Workflow Services Example

WCF also provides context bindings, WSHttpRelayContextBinding and NetTcpRelayContextBinding, to be used for WCF-WF integration. These bindings allow WF Receive activities (web services exposed directly from WF workflows) to receive contextual calls, i.e., calls carrying an extra SOAP header (instanceID) with a reference to the persisted workflow instance. Those of you familiar with WF-WCF integration will easily understand the importance of these two bindings.

ESB Services Example

As we have seen, the Service Bus can call any other services available in the cloud and outside it. Worker Roles can also implement services and register them on the Service Bus, thus allowing external apps to call them.

Access Control Service Example

Every call to the services is validated against the Access Control Service. The Access Control Service is actually an STS (Security Token Service) that intercepts all calls, authenticates the caller, and returns a number of claims that the service then uses to authorize the call. This is a complex topic on its own, which I will write about in a later entry on this blog. For now, I just wanted to give an idea of how this service is used.
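
To give a flavour of what the service side sees, here is a minimal sketch, assuming the service has been configured with the “Geneva” Framework components so that the token issued by the Access Control Service has already been validated: the caller's claims end up on the thread principal and can be inspected to make the authorization decision. The claim type and value checked below are illustrative only.

using System.Threading;
using Microsoft.IdentityModel.Claims; // "Geneva" Framework claims model

public static class AuthorizationSketch
{
    // Call this from inside a service operation, after token validation has run.
    public static bool CallerCanSend()
    {
        IClaimsPrincipal principal = Thread.CurrentPrincipal as IClaimsPrincipal;
        if (principal == null)
        {
            return false; // no claims available, reject the call
        }

        foreach (IClaimsIdentity identity in principal.Identities)
        {
            foreach (Claim claim in identity.Claims)
            {
                // Illustrative claim type/value; the actual claims depend on the
                // rules configured in the Access Control Service.
                if (claim.ClaimType == "http://example.org/claims/action" && claim.Value == "Send")
                {
                    return true;
                }
            }
        }
        return false;
    }
}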

Access Control Service Example

The same applies when accessing the web apps hosted in a Web Role or the storage.

There are a number of possibilities; just use your imagination (and some best practices), and you can basically mix and match these services according to your needs. You can build complex processes, applications, and services using these building blocks without having to worry about setting up the infrastructure that supports them. The benefits are obvious, and I believe that in the long term this is where IT is heading.

Hosting a WCF Service in Windows Azure

When I first tried to create and deploy a WCF web service into the cloud I faced several constraints, some derived from my inexperience with the Azure Platform, some due to the fact that this is still a fairly recent technology from Microsoft, a CTP after all. In the next few paragraphs I will walk you through the steps to create and deploy a WCF service exposed with the WSHttpBinding.

There are a few prerequisites that need to be met in order to proceed with Azure development. To set up the proper development environment, one needs to have:

Windows Vista or Windows 2008
Visual Studio 2008 + SP1
Windows Azure SDK and Windows Azure Tools for Microsoft Visual Studio
(http://www.microsoft.com/azure/sdk.mspx)
Access to Azure Services Developer Portal
(http://www.microsoft.com/azure/register.mspx)

Now, let's start by creating a new project in Visual Studio of type “Web Cloud Service”.

createwebcloudservice1

Leave the configuration and definition files as they were created by Visual Studio. Note that the CTP access permits only one instance, i.e., only one virtual machine; do not change this setting, as you can play with it only on the local Development Fabric.
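
For reference, the instance count lives in the ServiceConfiguration.cscfg file that Visual Studio generated; it should look roughly like the following (the exact schema may differ between SDK builds), with the count left at 1:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <!-- The CTP allows a single instance; leave the count at 1 for cloud deployments. -->
    <Instances count="1" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>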

Even though all we want is to create and deploy a WCF service, leave the default.aspx page in place merely as a faster way to verify that the package was properly deployed. For that, just add a label to the page with some text, for example as shown below.
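
A one-liner inside the page's form element is enough; the label name and text are just placeholders:

<asp:Label ID="StatusLabel" runat="server"
    Text="MyCloudService web role is up and running." />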

deafultaspx

Now add a WCF Service to the project as follows

addservice

Alter the service contract to something a little more demo-friendly, like:

[ServiceContract]
public interface IService
{
    [OperationContract]
    string Echo(string msg);
}

public class Service : IService
{
    public string Echo(string msg)
    {
        return "Echo: " + msg;
    }
}

Also alter the configuration file (web.config), specifying the security policy for your binding:

<bindings>
  <wsHttpBinding>
    <binding name="wsConfig">
      <security mode="None" />
    </binding>
  </wsHttpBinding>
</bindings>
<services>
  <service behaviorConfiguration="MyCloudService_WebRole.ServiceBehavior"
           name="MyCloudService_WebRole.Service">
    <endpoint address="" binding="wsHttpBinding"
              contract="MyCloudService_WebRole.IService"
              bindingConfiguration="wsConfig">
    </endpoint>
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
  </service>
</services>

Test your web application and service locally by right-clicking the default.aspx page and selecting “View in Browser”.

webpagetest

You used the ASP.NET Development Server for this test. If you use the local Azure Development Fabric instead, you will get the following error when you test your service. This appears to be a bug, because you do not get the same error once you deploy to the real cloud.

serviceerror

Speaking of deployment, right-click the MyCloudService project and select “Publish”. Once you select the “Publish” option, you should see a browser window open on your Azure project as shown below, as well as an explorer window with the configuration and definition files. Press the “Deploy…” button and follow the instructions.

deploy1

Press the “Run” button to test your web app and service; this will take several minutes while your VM starts.

run

To test your app, simply click the temporary DNS name provided and you should get something similar to this:

webpagetest1

Now, change the URL to address the Web Service and you should get

cloudservicetest

Notice that the URL for the WSDL provided by Azure is an internal URL that does not resolve; this has been reported as a bug and will be fixed. To view your WSDL, simply change the URL in the browser to http://8f513536-a984-47e5-ac32-283f32b2d51d.cloudapp.net/service.svc?wsdl

wsdltest

Now, promote your project to the production environment

promote

promoted

This should be quite fast since it only changes the DNS name under which your app is exposed.

Our test would not be complete without building a client that actually calls the service, so let's do it. Since the WSDL provided in the cloud has references to URLs that do not resolve from the client, the best way to build the client is to run the service locally with the ASP.NET Development Server. For that, simply double-click the “ASP.NET Development Server”

aspnetdevserver

And browse to the WSDL

servicetest1

Then add a console application to the solution as follows

clientproject

Reference the Web Service to create the proxy

servicereference

And add the following code to the main function

static void Main(string[] args)
{
    ServiceReference1.ServiceClient proxy = new ServiceReference1.ServiceClient();
    Console.WriteLine(proxy.Echo("Hello Cloud World!"));
    proxy.Close();
    Console.ReadLine();
}

First, test it locally; then change the address in the configuration file to the one in the cloud:

<client>
  <endpoint address="http://icloud.cloudapp.net/Service.svc"
            binding="wsHttpBinding"
            bindingConfiguration="WSHttpBinding_IService"
            contract="ServiceReference1.IService"
            name="WSHttpBinding_IService">
  </endpoint>
</client>

Compile it and run it against the cloud. You should get an exception like the following:

“The message with To ‘http://icloud.cloudapp.net/service.svc’ cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver’s EndpointAddresses agree.”

This is due to a verification made by the default EndpointAddressMessageFilter, which detects a mismatch between the two addresses. The cause is probably related to the virtualization of the service address, i.e., the internally assigned address the service actually listens on. The following code was retrieved with Reflector and shows the logic behind the Match function.

public override bool Match(Message message)
{
    if (message == null)
    {
        throw DiagnosticUtility.ExceptionUtility.ThrowHelperArgumentNull("message");
    }
    Uri to = message.Headers.To;
    Uri uri = this.address.Uri;
    return (((to != null) && this.comparer.Equals(uri, to)) && this.helper.Match(message));
}

Fortunately, there is a service behavior that resolves this problem. Add the AddressFilterMode attribute to the service implementation as shown below, then recompile and redeploy the service to the cloud.

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class Service : IService
{
    public string Echo(string msg)
    {
        return "Echo: " + msg;
    }
}

Run the client console application again and this time you should get a response back from your cloud service.

client1