
Sunday, 1 September 2013

IIS Application Request Routing


Introduction

In this article series, I am exploring the use of IIS Application Request Routing to publish Exchange 2013 services, such as Outlook Web App, to the Internet. In the first part we looked at what IIS Application Request Routing is, how it works, and went through its installation steps. In this article we will start configuring it to work with our Exchange environment.

Achieving High Availability and Scalability

As we saw in the first article of this series, IIS Application Request Routing (ARR) is a proxy-based routing module that forwards HTTP requests to content servers based on HTTP headers, server variables and load balance algorithms. A typical ARR deployment is illustrated in the diagram below:
Figure 2.1: Example of an ARR Deployment
While ARR provides high availability and scalability for the content servers, the overall deployment is not highly available or scalable because ARR is a single point of failure and the scalability of the content servers is limited by the maximum capacity of the ARR server used.
In order to overcome these challenges, you should consider using multiple ARR servers with load balancers. ARR can be deployed in active/passive mode to achieve only high availability, or in active/active mode to achieve both high availability and scalability. The load balancers' layer 3 and layer 4 functionality complements ARR's strength in making routing decisions based on layer 7 information, such as HTTP headers and server variables. At the same time, ARR does not provide fault-tolerant deployment features for itself and must rely on other technologies and solutions to achieve high availability for the ARR tier, as shown below:
Figure 2.2: Example of an ARR Deployment with Load Balancers
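To make the layer distinction concrete, here is a minimal sketch in plain Python (not ARR code) contrasting a layer 3/4 style decision, which only sees addresses and ports, with a layer 7 style decision that can also look at the Host header and URL path. The host names and farm names are invented for the example.

```python
# Conceptual sketch only: illustrates why layer-7 (HTTP) information allows
# richer routing decisions than layer-3/4 (IP/port) information. Not ARR code.

def route_layer4(dst_ip: str, dst_port: int) -> str:
    """A layer-3/4 device only sees addresses and ports."""
    # Everything arriving on 443 looks identical, so it can only spread load.
    return "some-arr-server" if dst_port == 443 else "drop"

def route_layer7(headers: dict, path: str) -> str:
    """An ARR-style proxy can also inspect HTTP headers and the URL."""
    host = headers.get("Host", "").lower()
    if host == "mail.contoso.com" and path.lower().startswith("/owa"):
        return "Exchange - OWA farm"          # hypothetical farm name
    if path.lower().startswith("/ecp"):
        return "Exchange - ECP farm"          # hypothetical farm name
    return "default farm"

if __name__ == "__main__":
    print(route_layer4("192.0.2.10", 443))
    print(route_layer7({"Host": "mail.contoso.com"}, "/owa/auth/logon.aspx"))
```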

Configuring Application Request Routing v2.5

Now that ARR is installed, we can start configuring it to publish our Exchange services such as Outlook Web App (OWA).
The first step is to create a farm with all the Exchange 2013 CAS servers that will be responsible for serving OWA requests. To do so:
  1. Launch IIS Manager;
  2. Right-click on Server Farms and select Create Server Farm...:
Figure 2.3: Application Request Routing Server Farms
  3. Give the server farm a friendly name and click Next:
Figure 2.4: Specify a Web Farm Name
  4. Specify the addresses of the servers you want to add to the farm (you can also use the servers' FQDNs). The Advanced settings section lets you change the TCP ports that will be used, as well as the weight of each server, neither of which we need to configure in this scenario. You can also specify upfront if you want any of the servers to be added as offline, which can be useful when you are setting up ARR for servers that are not yet fully configured or operational (the short sketch after Figure 2.6 below summarises these settings).
Figure 2.5: Adding Servers to the Farm
  5. Click Finish to complete the creation of the farm;
  6. In the Rewrite Rules message box, click Yes. This will make ARR automatically create and configure the rewrite rules we will be using later on:
Figure 2.6: Rewrite Rules Automatic Creation
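As a quick recap of what the wizard collects, the sketch below models the farm definition as plain Python data: a friendly name plus a list of servers with their ports, weights and online/offline state. This is only an illustration of the settings discussed above, not how ARR stores its configuration, and the CAS host names are assumptions.

```python
# Conceptual sketch of the information collected by the Create Server Farm
# wizard (friendly name, server addresses, ports, weight, online/offline).
# Not how ARR stores its configuration; the server names are examples.
from dataclasses import dataclass, field

@dataclass
class FarmServer:
    address: str            # hostname or FQDN of the Exchange 2013 CAS
    http_port: int = 80     # changeable under Advanced settings
    https_port: int = 443
    weight: int = 100       # relative weight, left at the default here
    online: bool = True     # servers can be added as offline

@dataclass
class ServerFarm:
    name: str
    servers: list = field(default_factory=list)

farm = ServerFarm(
    name="Exchange – OWA",
    servers=[
        FarmServer("cas1.contoso.local"),
        FarmServer("cas2.contoso.local"),
        FarmServer("cas3.contoso.local", online=False),  # not yet operational
    ],
)
print(f"{farm.name}: {sum(s.online for s in farm.servers)} server(s) online")
```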
Once the farm has been created, it is time to configure it. If you click on Servers, you will get an overview of the status of all the servers in the farm:
Figure 2.7: Server Status
If you click on the name of the farm itself, in this case Exchange – OWA, you are presented with several options to configure and manage the farm. Let us go through all the available options:
Figure 2.8: Farm Configuration and Management Options

Caching

By default, everything that passes through ARR is cached in memory for 60 seconds (note that disk caching is also enabled by default). This means that if two users request the same resource within 60 seconds, ARR does not need to go back to the content server a second time to retrieve it.
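Conceptually, the memory cache behaves like a simple TTL cache. The sketch below is a minimal illustration of that idea, assuming a 60-second TTL; it is not ARR's actual caching engine.

```python
# Minimal sketch of the idea behind ARR's 60-second memory cache: identical
# requests within the TTL are answered from the cache instead of hitting the
# content server again. An illustration only, not ARR's implementation.
import time

class TtlCache:
    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}            # key -> (expiry_time, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]         # still fresh: no round trip to the server
        return None

    def put(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TtlCache(60)

def fetch(url: str) -> str:
    cached = cache.get(url)
    if cached is not None:
        return cached               # a second user within 60 s gets this copy
    body = f"<html>response for {url}</html>"   # stand-in for the real fetch
    cache.put(url, body)
    return body
```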
Unselect the Enable disk cache option to disable the disk cache and click Apply:
Figure 2.9: Disabling Disk Cache

Health Test

On this page we can configure health settings and set the properties for URL testing and live traffic testing. The Live Traffic test uses live requests, allowing ARR to mark a server as unhealthy based on configurable conditions. However, we cannot use this test to determine if an unhealthy server has become healthy again, because ARR does not forward live requests to servers that are currently unhealthy.
The URL Test checks a specified URL against one or more of the following conditions:
  • A response was received within the configured timeout period;
  • The HTTP status meets the configured acceptable status codes;
  • The body of the response contains the specified text configured in the response match.
When load balancing requests across multiple servers, as we will see shortly, if any of these conditions fail for a server, that server is marked as unhealthy and is not used to serve user requests.
Because this feature is limited to a single URL, it is recommended to create a test page that reflects the overall health of the server (this information can come from Operations Manager, for example), as ARR can be configured to look for specific words in that test page.
Alternatively, if the URL is set to the FQDN of the ARR server, the test is performed against all servers configured in the farm. As such, we can easily configure ARR to check the OWA webpage across all servers in the farm by using this method:
Figure 2.10: Application Request Routing - Health Tests
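To illustrate what the URL test evaluates, here is a hedged sketch of a standalone probe that applies the three conditions listed above to each farm member. It is not how ARR runs the test internally, and the health-check URL, timeout and expected text are assumptions for the example.

```python
# Standalone sketch of the three URL-test conditions: a response within the
# timeout, an acceptable status code, and an optional response-match string.
# Illustrative only; not ARR's internal test logic.
import urllib.request
import urllib.error

def url_health_test(url: str,
                    timeout_s: float = 30.0,
                    acceptable_codes=(200,),
                    response_match: str = "") -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            status = resp.status
            body = resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError as err:
        status, body = err.code, ""
    except (urllib.error.URLError, OSError):
        return False                       # no response within the timeout
    if status not in acceptable_codes:
        return False                       # unacceptable HTTP status code
    if response_match and response_match not in body:
        return False                       # expected text not in the body
    return True

# Example: probe a health page on each farm member (hypothetical URLs/names).
for server in ("cas1.contoso.local", "cas2.contoso.local"):
    healthy = url_health_test(f"https://{server}/owa/healthcheck.htm",
                              timeout_s=30, response_match="200 OK")
    print(server, "healthy" if healthy else "unhealthy")
```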
Response match is an optional test to make sure that the body of the response contains the expected string. If you have customized your OWA logon page, for example, you can enter a word here that you expect to see every time a user successfully reaches the OWA page.
The Minimum servers option specifies the minimum number of healthy servers that you must have to appropriately service the expected volume of traffic. When there are fewer healthy servers than this minimum, the health of the servers is ignored so that ARR can continue to provide service to users.
Using Verify URL Test, we can send a GET request with the value specified in URL to all application servers defined in the server farm. In my scenario, only two servers were tested because the third was added as offline, and the second server failed the test because it does not exist:
Figure 2.11: Verifying URL Test Results

Load Balance

Here we configure how user requests are routed to the servers in the farm. The default option is Least current request, which is probably the most commonly used, since administrators normally want to send requests to the server that currently has the fewest active requests. You will see in the next section how ARR tracks the number of requests for each server.
Figure 2.12: Application Request Routing - Load Balancing Options
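The idea behind Least current request can be sketched in a few lines: among the servers that are online and healthy, pick the one with the fewest requests currently in flight. The sketch below is illustrative only and mirrors the three-server scenario used in this article; the host names are assumptions.

```python
# Sketch of the "Least current request" idea: pick the healthy, online server
# with the fewest requests currently in flight. Illustrative only; ARR's own
# counters are the ones shown on the Monitoring and Management page.
from dataclasses import dataclass

@dataclass
class ServerState:
    address: str
    healthy: bool = True
    online: bool = True
    current_requests: int = 0    # requests currently being served

def pick_least_current_request(servers):
    candidates = [s for s in servers if s.healthy and s.online]
    if not candidates:
        raise RuntimeError("no healthy servers available")
    return min(candidates, key=lambda s: s.current_requests)

servers = [
    ServerState("cas1.contoso.local", current_requests=12),
    ServerState("cas2.contoso.local", healthy=False),   # failed the URL test
    ServerState("cas3.contoso.local", online=False),    # added as offline
]
print(pick_least_current_request(servers).address)      # -> cas1.contoso.local
```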

Monitoring and Management

In this section we can monitor and manage the servers in our farm. ARR provides useful statistics for each server, such as its health status, how many requests it has received and responded to, how many requests failed, and so on. We can already see that, due to the health test we configured, the second server is marked as Unhealthy, which means ARR will not send any user requests to it.
We will come back to this section once everything is configured, to check that it is all working as expected.
Figure 2.13: Application Request Routing - Monitoring and Management
We can also take servers offline or simply configure them to not accept new connections (similar to draining a server in a cluster):
Figure 2.14: Options for Managing Servers
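The difference between taking a server offline and making it unavailable for new connections can be sketched as three simple states. This is an illustration of the behaviour described above, not ARR's internal model.

```python
# Sketch of the management actions mentioned above: taking a server offline
# versus letting it drain (no *new* connections while existing ones finish).
from enum import Enum

class ServerMode(Enum):
    AVAILABLE = "available"            # accepts new and existing connections
    DRAIN = "drain"                    # existing connections only
    OFFLINE = "offline"                # receives no traffic at all

def accepts_request(mode: ServerMode, is_new_connection: bool) -> bool:
    if mode is ServerMode.OFFLINE:
        return False
    if mode is ServerMode.DRAIN:
        return not is_new_connection   # similar to draining a cluster node
    return True

print(accepts_request(ServerMode.DRAIN, is_new_connection=True))    # False
print(accepts_request(ServerMode.DRAIN, is_new_connection=False))   # True
```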
Both management and monitoring options are discussed in more detail in the next articles of this series.

Proxy

This section allows us to configure how requests are forwarded to the servers in the farm. For example, we can add the X-Forwarded-For header to requests so that we can see who the actual client was (useful when troubleshooting):
Figure 2.15: Application Request Routing - Proxy Options
Here, change the Time-out (seconds) value to 180 and the Response buffer threshold (KB) to 0. Setting the timeout to 180 seconds should prevent clients from disconnecting and reconnecting unexpectedly; however, this value needs to be tested for each deployment. This setting is particularly important if you are configuring ARR for Outlook clients with Exchange 2010.
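Coming back to the X-Forwarded-For option mentioned above, the sketch below shows what the proxy conceptually does when it is enabled: the client's address is appended to the forwarded request so the back-end logs can show who the real client was. ARR does this for you when the option is ticked; the code here is only illustrative and the host name is an assumption.

```python
# Sketch of what enabling X-Forwarded-For means for the forwarded request:
# the proxy appends the real client address so back-end logs can show it.
def add_x_forwarded_for(headers: dict, client_ip: str) -> dict:
    fwd = dict(headers)
    existing = fwd.get("X-Forwarded-For")
    # Append to any existing chain so every hop remains visible.
    fwd["X-Forwarded-For"] = f"{existing}, {client_ip}" if existing else client_ip
    return fwd

original = {"Host": "mail.contoso.com"}               # hypothetical host name
print(add_x_forwarded_for(original, "203.0.113.25"))
# {'Host': 'mail.contoso.com', 'X-Forwarded-For': '203.0.113.25'}
```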

Routing Rules

This is where we configure our server farm to use the URL Rewrite functionality (that we will see in the next article) as well as SSL offloading (enabled by default).
SSL offloading is often used to help maximize the resources on the application servers, since they do not have to spend cycles encrypting and decrypting requests and responses. However, when this feature is enabled, all communication between the ARR server and the application servers is done in clear text, even for HTTPS requests from clients to the ARR server. For this scenario we will not be using SSL offloading, so uncheck the Enable SSL offloading box and then click Apply to save the changes:
Figure 2.16: Application Request Routing - Routing Rules
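What the Enable SSL offloading checkbox changes can be summarised in a few lines: whether the hop from ARR to the application servers is re-encrypted or sent in clear text. The sketch below is purely conceptual, not how ARR implements it.

```python
# Sketch of the effect of SSL offloading on the ARR-to-server hop.
def backend_scheme(client_used_https: bool, ssl_offloading: bool) -> str:
    if not client_used_https:
        return "http"                  # plain requests stay plain
    # With offloading, ARR decrypts and forwards in clear text; without it,
    # the connection to the CAS servers is made over HTTPS again.
    return "http" if ssl_offloading else "https"

print(backend_scheme(client_used_https=True, ssl_offloading=True))    # http
print(backend_scheme(client_used_https=True, ssl_offloading=False))   # https
```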

Server Affinity

In this final section we can configure “sticky sessions”. If we want users to go back to the same server they used on their first connection, we can enable Client affinity, and ARR will set a cookie in their session to help it determine which server the user should be directed to on subsequent connections/requests. As we are not interested in client affinity, leave this setting disabled.
Figure 2.17: Application Request Routing – Server Affinity
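For completeness, here is a rough sketch of how cookie-based affinity works in general: the first response sets a cookie that identifies the chosen server, and later requests carrying that cookie are routed back to the same server. This is a generic illustration, not ARR's implementation; the cookie name below mimics ARR's default but is used here only for the example, as in this scenario affinity stays disabled.

```python
# Generic sketch of cookie-based client affinity ("sticky sessions").
import hashlib

COOKIE_NAME = "ARRAffinity"            # cookie name used for the example

def server_token(address: str) -> str:
    return hashlib.sha256(address.encode()).hexdigest()[:16]

def choose_server(cookies: dict, servers: list) -> tuple:
    """Return (server, set_cookie_value_or_None)."""
    token = cookies.get(COOKIE_NAME)
    for server in servers:
        if token == server_token(server):
            return server, None        # sticky: same server as before
    chosen = servers[0]                # first request: pick any server
    return chosen, server_token(chosen)

servers = ["cas1.contoso.local", "cas2.contoso.local"]
first, cookie = choose_server({}, servers)
again, _ = choose_server({COOKIE_NAME: cookie}, servers)
assert first == again                  # subsequent requests stick to cas1
print(first, cookie)
```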
If you are using ARR to publish RPC over HTTP (Outlook Anywhere) in Exchange 2007/2010, you should also make the following change:
  1. Under the IIS root, open Request Filtering:
Figure 2.18: IIS Request Filtering
  2. Under the Actions pane, click Edit Feature Settings...:
Figure 2.19: Edit Feature Settings
  3. Increase the Maximum allowed content length to 2147483648 (2 GB).
