
Saturday, May 16, 2009

SQL Injection - An illustration



SQL injection is one of the most common means of exploiting security loopholes in a web application. In the illustration that follows, you can see how easily someone can take advantage of it to gain unauthorized access to data they were never supposed to see. Let us see how, but first a brief bit of background...


Purpose of using SQL:
SQL statements are generally used to retrieve, update and delete data in a web application's database. This normally happens behind the scenes, and the results are displayed to a user based on their authority level. In other words, the data is protected and access is granted on a selective basis.


How the security issue is relevant here:
Many web applications provide some form of search capability, where users supply their own filter on the data the application displays. For example, a filter to see only the records posted in 2009. If the application is not secure, a hacker can exploit this functionality: rather than supplying a value to filter upon, he can supply another SQL fragment that gets injected into the SQL statement the application uses to retrieve data.
Attack example – Assume a user only has access to the records of his department and enters some criteria to filter through them. He wants to see the latest records, so he enters 2009 in the year field.
The application might then execute the following statement against the database: SELECT * FROM … WHERE … AND Year = 2009
A hacker, on the other hand, might try to trick the application by entering the following into that same year field: 2009 OR 1=1
The application, if not careful, would then execute: SELECT * FROM … WHERE … AND Year = 2009 OR 1=1
This would potentially give the user access to all the records in the system, even the ones he shouldn't see... :( :(


Incredibly simple...!! Isn't it...!!
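The fix is nearly as simple. Below is a hedged sketch of the same filter built with a parameterized ADO.NET command; the table, column and variable names (Records, DeptId, myConnectionString, currentDeptId) are made up for illustration:

using System.Data.SqlClient;

// The user's input is bound as a parameter value, never concatenated
// into the SQL text, so "2009 OR 1=1" cannot change the query's shape.
string sql = "SELECT * FROM Records WHERE DeptId = @deptId AND Year = @year";

using (SqlConnection conn = new SqlConnection(myConnectionString))
using (SqlCommand cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@deptId", currentDeptId);
    cmd.Parameters.AddWithValue("@year", 2009);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process the row
        }
    }
}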

Common ASP.NET Security Flaws

There is a wide array of attacks that ASP.NET web applications need to protect against, but most security holes are due to flaws in the following areas:


Authentication:
Making it easy for attackers to reveal users' credentials, or worse, to circumvent the application's authentication altogether.
Possible deficiencies: lack of a password policy (strong passwords, expiration dates, etc.), passing internal messages back to the browser, using dynamic SQL on the login page (SQL injection), using cookies and other insecure means to store users' credentials, and passing user names and passwords in clear text.
Possible attacks: network eavesdropping, brute force and dictionary attacks, SQL injection (on the login page), cookie replay attacks and credential theft.


Authorization:
Allowing logged-in users to perform actions without authorization checks (i.e. vertical and horizontal privilege escalation).
Possible deficiencies: inconsistent authorization checks across users' requests and web pages, lack of data validation, and trusting data submitted by users (i.e. cookies, hidden fields, URL parameters, etc.).
Possible attacks: privilege escalation attacks (horizontal and vertical), disclosure of confidential data, and data tampering attacks.


Data Validation:
Trusting data submitted by the user and acting upon it.
Possible deficiencies: lack of consistent and strict data validation throughout the web application, and failing to encode data sent to the browser (see the sketch below).
Common attacks: cross-site scripting (XSS), SQL injection, data tampering (query string, form fields, cookies and HTTP headers), embedded malicious characters, and HTTP response splitting.
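To illustrate the encoding point, here is a minimal sketch, assuming a Web Forms code-behind with a hypothetical Label named lblComment and a hypothetical form field named comment, that HTML-encodes untrusted input before displaying it:

using System.Web;

// Never echo raw user input back to the page; encode it so any embedded
// <script> markup is rendered as harmless text.
string comment = Request.Form["comment"];          // untrusted input (hypothetical field)
lblComment.Text = HttpUtility.HtmlEncode(comment); // safe to display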


Application Configuration:
Using the default configuration on the application and the hosting server.
Possible deficiencies: granting the application more permissions than it actually needs, failing to properly secure resources (operating system, database, etc.), and passing internal application information back to the browser (internal messages, exceptions and trace information).
Common attacks: unauthorized access to administrator functionality, unauthorized access to configuration information, retrieval of clear-text configuration information, and unauthorized access to data stores.

Final Note:

An attack, or even a request for a security audit from a customer, can cost you time, money and potentially your reputation. So it's better to take care of these common issues now, or be ready to face the consequences.

Saturday, May 9, 2009

Session Management in .NET

Manage Session State on the Server




The .NET Framework offers several built-in, easy-to-use features that provide a clean approach to server-side state management. Some of these features support state across Web farms (multiple load-balancing Web servers).

Store State in the Session Object
The Session object provides a collection for storing all sorts of items about a user's session. The first time a user requests a page in your application, ASP.NET creates a session environment for this user on the Web server. ASP.NET exposes the Session object as a collection for you to store and retrieve state information for this user. You use the Session object like any other collection to store anything from simple data types to complex objects and structures. ASP.NET assigns a unique ID for each session to isolate an individual user's private state information. Each active ASP.NET session is identified and tracked using a 120-bit SessionID string containing URL-legal ASCII characters. SessionID values are generated using an algorithm that guarantees uniqueness so that sessions do not collide, and the SessionID's randomness makes it harder to guess the session ID of an existing session.
ASP.NET uses a temporary cookie (which is stored in client RAM, then discarded when the user closes the browser) to pass the session ID between the browser and the Web server. It's important to understand that only the session ID (a small value) gets passed between client and server. The state information itself is stored on the Web server in RAM without ever crossing the wire (see Figure 1).

Figure 1: Manage User State With Sessions. ASP.NET stores each user's state information in a session environment in RAM on the Web server. ASP.NET uses a temporary cookie or URL munging to pass the unique session ID between client and server. Your application uses the Session object as a collection for accessing each user's state information.
After a period of inactivity from the client (20 minutes by default), user sessions time out and are discarded from server RAM.

// to store information
Session["myname"] = "Lloyd";
// to retrieve information
string myname = (string)Session["myname"];
Sessions don't work at all if the user has disabled cookies, because the Web server uses a cookie to pass the session ID. Fortunately, ASP.NET has decorated the Session object with two new features that address this problem and the scalability problem discussed below: cookieless sessions and out-of-process state management.
As the name implies, cookieless sessions enable the Session object even if the user turns off cookie support in the browser. Enabling this feature is as simple as setting an attribute in the web.config file:
mode="OffInProcStateServerSqlServer"
stateConnectionString="tcpip=127.0.0.1:42424"
sqlConnectionString="data source=127.0.0.1;user id=sa;password="
cookieless="truefalse"
timeout="20"
/>



Then, ASP.NET auto-magically inserts the session ID into the URL of every link in your application, rather than using a cookie to pass the session ID back and forth over the wire. For example, http://localhost/PageA.aspx becomes http://localhost/(w1fmnnqzif4k1bnuarqrwinq)/PageA.aspx.

In the past, only a few developers could employ this technique (commonly referred to as "URL munging"), and only through tedious coding. Now, ASP.NET makes it easy, elegant, and accessible to all. ASP.NET is smart enough to insert the session ID into the URLs of every anchor tag and form action in your application. Cookieless sessions guarantee that your application functions regardless of cookie support on the client. If you need to "manufacture" a URL for passing to an external application, you can use the Response.ApplyAppPathModifier method, which accepts any URL and "munges" it with the session ID. In this way, the external application can call back into your application with the appropriate session ID:

string sMungedUrl = Response.ApplyAppPathModifier("PageA.aspx");

However, if you request another page without the embedded session ID (http://localhost/WebForm2.aspx), the state is lost and the ASP.NET framework issues a new session ID. Two other restrictions apply:
1. Fully qualified URLs (for example, http://localhost/home/default.aspx) cannot be used in Response.Redirect, Server.Transfer, or FORM action tags.
2. Root-relative addressing can also cause problems with Response.Redirect, Server.Transfer, and FORM action tags. In other words, /home/default.aspx cannot be used; you have to reference it using relative addressing, for example home/default.aspx.

Scale Up or Scale Out
Out-of-process state management deals with the issue of scalability. You have two ways to handle the demand of many sessions with lots of state information. The first is to "scale up": add more RAM and more CPUs to the server until you hit the ceiling on maximum memory and processors. The second is to "scale out": add more servers. A scaled-out configuration is commonly referred to as a Web farm, where each server in the farm runs the same ASP.NET application and the collection of servers appears to the outside world as a single site. This provides dynamic load-balancing by distributing client demand evenly across a set of servers.
Classic ASP applications can't take full advantage of Web farms. Session state is stored in RAM, so the user must always be directed to the server that stores his state information. Once a user's initial page request hits a server, the user is tied to that particular server for all subsequent page requests.

Figure 2: Scale Up With Out-Of-Process Sessions. You can configure a Web farm to load-balance a demanding user base. ASP.NET can store session information on a dedicated state server either in RAM (using the ASP.NET State Service) or on disk (using SQL Server). Client requests are satisfied by any server in the farm, which in turn communicates with the state server for session information.

ASP.NET solves this problem by providing "out-of-process" state management. This feature removes session state from the Web server and places it in another process on another machine called the state server (see Figure 2). The Web servers in the farm communicate with the state server to store and retrieve session information. True load-balancing is achieved, and any Web server in the farm can process any page request issued by any client at any time. Furthermore, Web servers can be taken down and brought back online without disrupting active user sessions.
ASP.NET generates unique IDs for each machine in the network automatically, by default. You configure a Web farm by setting each server's machine key to the same value. Edit the machine.config file (located in the C:\winnt\Microsoft.NET\Framework\vn.n.n\CONFIG directory) on each server and find the machineKey tag. Set the validationKey and decryptionKey attributes to a hex value (any value will do, as long as you use the same value on all machines):
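The element looks roughly like this; the key values below are placeholders, not real keys, and in practice you would generate long random hex strings:

<machineKey
    validationKey="SAME-HEX-VALUE-ON-EVERY-SERVER"
    decryptionKey="SAME-HEX-VALUE-ON-EVERY-SERVER"
    validation="SHA1"
/>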

You have two options for configuring a state server: Use the ASP.NET State Service or use SQL Server. The ASP.NET State Service uses RAM on the state server to store session information for all Web servers in the farm. This service is off by default; in a production environment, set its startup mode to "Automatic" in the Computer Management services console. Then, set two attributes in the web.config file of each Web server in the farm to enable the feature and identify the state server's IP address (leave the port at the default value of 42424):

mode="StateServer" stateConnectionString="tcpip=192.168.0.7:42424" .../>

Achieve Maximum Scalability
The SQL Server option stores session information in a database on the state server, and is available only if you have a SQL Server license. Although you incur a slight performance penalty by accessing a database rather than RAM, this option provides the greatest scalability, because database sizes are virtually unlimited compared with RAM. SQL Server uses caching extensively, so recently accessed state information is frequently retrieved from RAM anyway, which boosts performance. Furthermore, ASP.NET is smart enough to use a varbinary column for state information smaller than 7,000 bytes, and it uses a less efficient image column only if the state information exceeds 7,000 bytes. One caveat: you must ensure that any objects you store in Session are serializable if you want to use this feature.

Use Query Analyzer to execute the script file InstallSqlState.sql (located in the C:\winnt\Microsoft.NET\Framework\vn.n.n folder) to create the stored procedures ASP.NET requires for using SQL Server. ASP.NET uses tempdb to store session information for performance reasons, so sessions are lost if SQL Server goes down. You can modify the script (at your own risk) to use another database if you want truly durable sessions that survive server reboots.
Set two attributes in the web.config file of each of the farm's Web servers to enable the feature and identify SQL Server's IP address:
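A hedged sketch of what that web.config entry might look like (the server address and credentials are placeholders):

<sessionState
    mode="SQLServer"
    sqlConnectionString="data source=192.168.0.7;user id=sa;password=your_password"
    cookieless="false"
    timeout="20"
/>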
You can improve performance slightly for pages that only need to read, but not write, Session variables by including the EnableSessionState="ReadOnly" attribute in the <%@ Page %> directive. You can also turn off sessions entirely for pages that don't need them by specifying EnableSessionState="False", for even better performance on those pages.
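For example, a read-only page might declare (assuming a C# page):

<%@ Page Language="C#" EnableSessionState="ReadOnly" %>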

Summary
Below is a quick summary of the different modes of session state available in ASP.NET:
Storage location
InProc - session kept as live objects in web server (aspnet_wp.exe). Use "cookieless" configuration in web.config to "munge" the sessionId onto the URL (solves cookie/domain/path RFC problems too!)
StateServer - session serialized and stored in memory in a separate process (aspnet_state.exe). State Server can run on another machine
SQLServer - session serialized and stored in SQL server
Performance
InProc - Fastest, but the more session data, the more memory is consumed on the web server, and that can affect performance.
StateServer - When storing data of basic types (e.g. string, integer, etc), in one test environment it's 15% slower than InProc. However, the cost of serialization/deserialization can affect performance if you're storing lots of objects. You have to do performance testing for your own scenario.
SQLServer - When storing data of basic types (e.g. string, integer, etc), in one test environment it's 25% slower than InProc. Same warning about serialization as in StateServer.


REMOTING IN .NET

.NET Remoting

Before going into the discussion of .NET Remoting, let us talk about some of the important objects involved in it.
The process of packaging, unpacking and sending method calls across application domains via serialization and deserialization is called Marshalling.
Marshalling is carried out with the help of objects called sinks. A sink is an object that allows custom processing of messages during a remote invocation.
Channels are objects used to transport the messages over the network or across different application domains.
An Application Domain is a logical construct of the CLR that serves as the unit of isolation for an application. It guarantees that each application can be stopped independently, that an application cannot directly access the code or resources of another application, and that a fault in one application does not affect other applications. The CLR allows multiple application domains to run inside a single process.
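As a small illustration of that isolation boundary, the following minimal sketch creates and unloads a second application domain inside the same process:

using System;

class AppDomainDemo
{
    static void Main()
    {
        // Create a second application domain inside the current process.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
        Console.WriteLine("Created domain: " + sandbox.FriendlyName);

        // Unloading the domain stops its code without touching the default domain.
        AppDomain.Unload(sandbox);
    }
}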

.NET Remoting enables objects in different application domains to talk to each other. The real strength of remoting is in enabling communication between objects when their application domains are separated across the network. The .NET Remoting framework provides a number of services such as activation, lifetime control, and communication channels for transporting messages to and from the remote application. Formatters are used for encoding and decoding the messages before the channel transmits them. Applications can use the binary formatter where performance is critical and the XML (SOAP) formatter where interoperability is critical.
The flow of a remote call can be understood in the following steps:
1. When a client object wants to create an instance of the server object (to access the remote object), the remoting framework creates a proxy (TransparentProxy) of the server object on the client side, which contains a list of all the classes and interface methods of the remote object. The TransparentProxy class gets registered with the CLR.
2. The proxy object behaves just like the remote object; this leaves the client with the impression that the server object is in the client's process.
3. When the client calls a method on the server object, the proxy passes the call information to the remoting framework on the client. The remoting framework in turn sends the call over the channel to the remoting framework on the server.
4. The remoting framework on the server receives the call information and, on the basis of it, invokes the method on the actual object on the server, creating the object if necessary.
5. The remoting framework on the server then collects the results of the invocation and passes them through the channel to the remoting framework on the client.
6. The remoting framework on the client receives the response from the server and passes the results to the client object through the proxy.

Note: Remotable objects are objects that can be marshalled across application domain boundaries. All other objects are nonremotable.

There are basically two types of remotable objects:
Marshal-By-Value (MBV): objects are copied from the server application domain and passed to the client application domain.
Marshal-By-Reference (MBR): objects are accessed on the client side by using a proxy. The client just holds a reference to the object, which lives on the server side.

Marshal-By-Value objects reside on the server. However, when the client invokes a method of an MBV object, the MBV object is serialized (by the remoting framework), transferred over the network (using the channels and sinks), and restored on the client as an exact copy of the server-side object. The method is then invoked directly on that local copy. At that point the MBV object is no longer a remote object: method calls to it do not require a proxy object or marshalling because the object is locally available.
So MBV objects provide faster performance by reducing the number of network round trips, but in the case of large objects the time taken to transfer the serialized object from the server to the client can be very significant. Further, MBV objects don't give you the flexibility to run the remote object in the server environment (that is, you have to bring it to the client side).
An MBV object can be created by declaring a class with the Serializable attribute:

[Serializable()]
public class MyMBVObject
{
// …
}

If a class needs to control its own serialization, it can do so by implementing the ISerializable interface as follows:

using System.Runtime.Serialization;

[Serializable()]
public class MyMBVObject : ISerializable
{
    // Implement custom serialization here
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        // ...
    }
    // ...
}

Marshal-By-Reference objects are remote objects; they always reside on the server, and all the methods invoked on them are executed on the server side. The client communicates with an MBR object on the server using a local proxy object that holds a reference to the MBR object.
Although the use of MBR objects increases the number of network round trips, they are a good choice when the objects are prohibitively large or when the functionality of the object is only available in the server environment in which it was created.
An MBR object can be created by deriving from the System.MarshalByRefObject class:

public class MyMBRObject : MarshalByRefObject
{
// …
}

Remote Object Activation:
We have two types of remote objects, MBV and MBR; of these, only MBR objects can be activated remotely. No remote activation is needed in the case of MBV because the object itself is transferred to the client, as explained earlier.

Remotable Members:
An MBR object can remote (expose) the following types of members:
Non-Static public methods
Non-Static public properties
Non-Static public fields.
An MBR object can be configured in one of two activation modes:
Server Activated objects
Client Activated objects

Server Activated Objects (SAO)
SAOs are remote objects whose lifetime is directly controlled by the server. When a client requests an instance of a server-activated object, a proxy to the remote object is created in the client's application domain. The remote object itself is only instantiated (or activated) on the server when the client calls a method on that proxy object.
Server-activated objects provide limited flexibility because they can only be instantiated using their default (parameterless) constructors.
There are two possible activation modes for server activated objects
Single-call activation mode
Singleton activation mode

Single call activation Mode:
In the single-call activation mode, an object is instantiated for the sole purpose of responding to just one client request. After the request is fulfilled, the .NET remoting framework deletes the object and reclaims the memory. Objects activated in single-call mode are also known as stateless because they are created and destroyed with each client request and therefore do not maintain state across requests.

Singleton Activation Mode:
In the singleton activation mode there is at most one instance of the remote object, regardless of the number of clients accessing it. A singleton-mode object can maintain state information across method calls; for this reason, such objects are also sometimes known as stateful objects. The state maintained by a singleton-mode object is globally shared by all its clients.

Client Activated Objects (CAO):
CAOs are remote objects whose lifetime is directly controlled by the client. This is in direct contrast to SAOs, where the server, and not the client, has complete control over the lifetime of the objects. Client-activated objects are instantiated on the server as soon as the client requests that the object be created. Unlike an SAO, a CAO doesn't delay the object creation until the first method is called on the object (with an SAO, the object is instantiated only when the client calls a method on it).
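To make the distinction concrete, here is a hedged sketch of the registration calls a server might use for each mode; it is meant to sit inside the server's Main after the channel is registered, and ServiceClass and the URI names are modelled on the example below. In practice you would pick only one of these:

// Server-activated, single-call: a fresh instance services each call.
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(ServiceClass), "SingleCallService", WellKnownObjectMode.SingleCall);

// Server-activated, singleton: one shared instance for all clients.
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(ServiceClass), "SingletonService", WellKnownObjectMode.Singleton);

// Client-activated: the object is created on the server as soon as the
// client instantiates it, and the client controls its lifetime.
RemotingConfiguration.RegisterActivatedServiceType(typeof(ServiceClass));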

REVIEW BREAK
· .NET remoting enables objects in different application domains to talk to each other even when they are separated by applications, computers, or the network.
· The process of packaging and sending method calls among the objects across the application boundaries via serialization and deserialization is called marshaling.
· Marshal-by-value (MBV) and Marshal-by-reference (MBR) are the two types of remotable objects. MBV objects are copied to the client application domain from the server application domain, whereas only a reference to the MBR objects is maintained in the client application domain. A proxy object is created at the client side to interact with the MBR objects.
· A channel is an object that transports messages across remoting boundaries such as application domains, processes, and computers. The .NET Framework provides implementations for HTTP and TCP channels to allow communication of messages over HTTP and TCP, respectively.
· A channel has two end points. A channel at the receiving end, the server, listens for messages at a specified port number from a specific protocol, and a channel object at the sending end, the client, sends messages through the specified protocol at the specified port number.
· Formatters are the objects that are used to serialize and deserialize data into messages before they are transmitted over a channel. You can format the messages in SOAP or the binary format with the help of SoapFormatter and BinaryFormatter classes in the FCL.
· The default formatter for transporting messages to and from the remote objects for the HTTP channel is the SOAP formatter and for the TCP channel is the binary formatter.

Example: Step 1: Creating the Server (Server.cs) on Machine1

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

namespace Server
{
    public class ServiceClass : MarshalByRefObject
    {
        public void AddMessage(String msg)
        {
            Console.WriteLine(msg);
        }
    }

    public class ServerClass
    {
        public static void Main()
        {
            HttpChannel c = new HttpChannel(1095);
            ChannelServices.RegisterChannel(c);
            RemotingConfiguration.RegisterWellKnownServiceType(typeof(ServiceClass),
                "ServiceClass", WellKnownObjectMode.Singleton);
            Console.WriteLine("Server ON at 1095");
            Console.WriteLine("Press enter to stop the server...");
            Console.ReadLine();
        }
    }
}

Save this file as Server.cs. Compile this file using
csc /r:system.runtime.remoting.dll /r:system.dll Server.cs

This will generate an executable, Server.exe. Run it, and on the console you should see:
Server ON at 1095
Press enter to stop the server...
To check whether the HTTP channel is bound to the port, open a browser and type http://localhost:1095/ServiceClass?WSDL; you should see an XML file describing the service.

Step 2: Creating the Client Proxy and Code on Machine2
Creating a client proxy requires a tool provided by Microsoft called soapsuds.exe. This utility reads the WSDL description and generates a proxy assembly used to access the server. Go to a different machine and type in (replacing <Machine1> with the name or IP address of the machine running the server):
soapsuds -url:http://<Machine1>:1095/ServiceClass?WSDL -oa:Server.dll

This will create a proxy assembly called Server.dll, which will be used to access the remote object.

Client Code: TheClient.cs
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;
using Server;

public class TheClient
{
    public static void Main(string[] args)
    {
        HttpChannel c = new HttpChannel(1077);
        ChannelServices.RegisterChannel(c);
        ServiceClass sc = (ServiceClass)Activator.GetObject(typeof(ServiceClass),
            "http://<Machine1>:1095/ServiceClass");
        sc.AddMessage("Hello From Client");
    }
}

Save this file as TheClient.cs. Compile it using
csc /r:system.runtime.remoting.dll /r:system.dll /r:Server.dll TheClient.cs
The output will be TheClient.exe. Run it and check the server console on Machine1; you will see "Hello From Client". This example used an HTTP channel to transport messages to the remote component; a TCP channel can likewise be used to achieve the same result.


I hope this much grounding should be enough for you to start exploring every nook and corner of .NET Remoting by yourself. Cheers...!!!

Connection Pooling in .NET

Connection Pooling Basics
Opening a database connection is a resource-intensive and time-consuming operation. Connection pooling increases the performance of Web applications by reusing active database connections instead of creating a new connection with every request. The connection pool manager maintains a pool of open database connections. When a new connection request comes in, the pool manager checks whether the pool contains any unused connections and returns one if available. If all connections currently in the pool are busy and the maximum pool size has not been reached, a new connection is created and added to the pool. When the pool reaches its maximum size, all new connection requests are queued up until a connection in the pool becomes available or the connection attempt times out.
Connection pooling behaviour is controlled by the connection string parameters. The following four parameters control most of the connection pooling behaviour (see the sketch after this list):
* Connect Timeout - controls the wait period in seconds when a new connection is requested, if this timeout expires, an exception will be thrown. Default is 15 seconds.
* Max Pool Size - specifies the maximum size of your connection pool. Default is 100. Most Web sites do not use more than 40 connections under the heaviest load but it depends on how long your database operations take to complete.
* Min Pool Size - the initial number of connections that will be added to the pool upon its creation. Default is zero; however, you may choose to set this to a small number such as 5 if your application needs consistent response times even after it has been idle for hours. In this case the first user requests won't have to wait for those database connections to be established.
* Pooling - controls whether connection pooling is on or off. The default, as you may have guessed, is true. Read on to see when you may use the Pooling=false setting.
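For example, a connection string tying these parameters together might look like this (server, database and values are placeholders):

using System.Data.SqlClient;

string connStr =
    "Data Source=YOUR_SERVER;Initial Catalog=YOUR_DB;Integrated Security=SSPI;" +
    "Min Pool Size=5;Max Pool Size=60;Connect Timeout=15;Pooling=true;";

using (SqlConnection conn = new SqlConnection(connStr))
{
    conn.Open();   // drawn from the pool if an idle connection is available
    // ... do work ...
}                  // Dispose/Close returns the connection to the pool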

Common Problems and Resolutions

Connection pooling problems are almost always caused by a "connection leak": a condition where your application does not close its database connections correctly and consistently. When you "leak" connections, they remain open until the garbage collector (GC) eventually finalizes them and closes them for you. Unlike classic ADO, ADO.NET requires you to close your database connections explicitly as soon as you're done with them. If you think of relying on connection objects going out of scope, think again. It may take hours until the GC collects them. In the meantime your app may be dead in the water, greeting your users or support personnel with something like this:

Exception: System.InvalidOperationException
Message: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Source: System.Data
at System.Data.SqlClient.SqlConnectionPoolManager.GetPooledConnection(SqlConnectionString options, Boolean& isInTransaction)
at System.Data.SqlClient.SqlConnection.Open()

Closing your connections

When you intend to close your database connection, you want to make sure that you are really closing it. The following code looks fine yet causes a connection leak:
SqlConnection conn = new SqlConnection(myConnectionString);
conn.Open();
doSomething();
conn.Close();

If doSomething() throws an exception - conn will never get explicitly closed. Here is how this can be corrected:
SqlConnection conn = new SqlConnection(myConnectionString);
try
{
conn.Open();
doSomething(conn);
}
finally
{
conn.Close();
}

or
using (SqlConnection conn = new SqlConnection(myConnectionString))
{
conn.Open();
doSomething(conn);
}

Did you notice that in the first example we called conn.Close() explicitly while in the second one we make the compiler generate an (implicit) call to conn.Dispose() immediately following the using block? The C# using block guarantees that the Dispose method is called on the subject of the using clause immediately after the block ends. Close and Dispose methods of Connection object are equivalent. Neither one gives you any specific advantages over the other.

When returning a connection from a class method - make sure you cache it locally and call its Close method. The following code will leak a connection:

OleDbCommand cmd = new OleDbCommand(myUpdateQuery, getConnection());
int res = cmd.ExecuteNonQuery();
getConnection().Close(); // The connection returned from the first call to getConnection() is never closed. Instead of closing your connection, this line creates a new one and tries to close it.
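One hedged way to fix it, assuming getConnection() returns an already-open OleDbConnection as in the snippet above, is to cache the connection in a local variable and dispose it when you are done:

// using System.Data.OleDb;
using (OleDbConnection conn = getConnection())                      // cache the one connection locally
using (OleDbCommand cmd = new OleDbCommand(myUpdateQuery, conn))
{
    int res = cmd.ExecuteNonQuery();
}                                                                   // the same connection is closed here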

If you use SqlDataReader, OleDbDataReader, etc., close them. Even though closing the connection itself seems to do the trick, put in the extra effort to close your data reader objects explicitly when you use them.
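A minimal sketch of that habit, using a hypothetical query and connection string, wraps the reader in its own using block (CommandBehavior.CloseConnection additionally closes the connection when the reader is closed):

using (SqlConnection conn = new SqlConnection(myConnectionString))
using (SqlCommand cmd = new SqlCommand("SELECT Name FROM Customers", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetString(0));
        }
    }   // reader closed here; CloseConnection closes the connection with it
}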

Last but not least, never Close or Dispose your connection, or any other managed object, in a class destructor or Finalize method. This not only has no value in closing your connections but also interferes with the garbage collector and may cause errors.

Testing your changes
The only way to know the effect of your changes on connection pooling behavior is to load-test your application. If you have existing unit tests, use them; running them repeatedly in a loop can create a fair bit of stress on the application. If you don't, use a Web load-testing tool. There are plenty of commercial load-testing tools on the market. If you prefer freeware, consider OpenSTA, available at www.opensta.org. All you need to do to set up your load test is install the tool, bring up your Web application and click your way through. OpenSTA will record your HTTP requests into test scenarios that you can run as part of your load test.

Knowing that your application crashes under load doesn't always help you locate the problem. If the app crashes fairly quickly, all you may need to do is run several load tests, one for each module, and see which one has a problem. However, if it takes hours to crash, you will have to take a closer look.

Monitoring connection pooling behaviour

Most of the time you just need to know whether your application manages to stay within the size of its connection pool. If the load doesn't change but the number of connections constantly creeps up even after the initial "warm-up" period, you are most likely dealing with a connection leak. The easiest way to monitor the number of database connections is with the Performance Monitor, available under Administrative Tools on most Windows installations. If you are running SQL Server, add the SQL Server General Statistics -> User Connections performance counter (the counter is available on the SQL Server machine, so you may need to put its name or IP address into the Select Counters From Computer box). The other way to monitor the number of database connections is by querying your DBMS. For example, on SQL Server run:

EXEC SP_WHO

Or on Oracle, run:

SELECT * FROM V$SESSION WHERE PROGRAM IS NOT NULL

.NET CLR Data performance counters

In the documentation you may run into the .NET CLR Data performance counters. They are great if you know what they can and cannot do. Keep in mind that they do not always reset properly. Another thing to keep in mind is that IIS unloads app domains under stress, so don't be surprised when your number of database connections has dropped to zero while your min pool size is five!

Short term fixes

What if you discovered the connection pooling issue in production and you cannot take it offline to troubleshoot? Turn pooling off. Even though your app will take a performance hit, it shouldn't crash! Your memory footprint will also grow. What if it doesn't crash all that often, and you don't want to take a performance hit? Try this:

conn = new SqlConnection();
try
{
    conn.ConnectionString = "integrated security=SSPI;SERVER=YOUR_SERVER;DATABASE=YOUR_DB_NAME;Min Pool Size=5;Max Pool Size=60;Connect Timeout=2;"; // Notice Connect Timeout set to only two seconds!
    conn.Open();
}
catch (Exception)
{
    if (conn.State != ConnectionState.Closed) conn.Close();
    conn.ConnectionString = "integrated security=SSPI;SERVER=YOUR_SERVER;DATABASE=YOUR_DB_NAME;Pooling=false;Connect Timeout=45;";
    conn.Open();
}

If I fail to open a pooled connection within two seconds, I try to open a non-pooled connection instead. This introduces a two-second delay when no pooled connections are available, but if your connection leak doesn't show up most of the time, this is a good steam valve.


I hope this will help you resolve some of your connection pooling issues. Enjoy coding...!!

ASP.NET Page Life Cycle

Page Execution Stages:
The first stage in the page life cycle is initialization. This is fired after the page's control tree has been successfully created. All the controls that are statically declared in the .aspx file are initialized with their default values. Controls can use this event to initialize settings that are used throughout the lifetime of the incoming web request. View state information is not available at this stage.
After initialization, the page framework loads the view state for the page. View state is a collection of name/value pairs where controls, and the page itself, store information that must persist between web requests. It contains the state of the controls from the last time the page was processed on the server. By overriding the LoadViewState() method of a control, you can see how view state is restored.
Once view state is restored, each control is updated with the client-side changes: the page framework loads the posted data values. The post-back data event gives each control a chance to update its state to reflect the state of the corresponding HTML element on the client. At the end of this stage, the controls reflect the changes made on the client, and the Load event is fired.
The key event in the life cycle is when the server-side code associated with an event triggered on the client is executed. When the user clicks a button, the page posts back; the page framework calls RaisePostBackEvent, which looks up the event handler and runs the associated delegate.
After the post-back event, the page prepares for rendering and the PreRender event is called. This is the place where you can perform update operations before the view state is stored and the output is rendered.
The next stage is saving view state: the values of all the controls are saved to their own view state collections. The resulting view state is serialized, hashed, base64 encoded and written to the __VIEWSTATE hidden field.
Next, the Render method is called. This method takes an HtmlTextWriter object and uses it to accumulate all the HTML text to be generated for the control. For each control, the page calls its Render method and caches the HTML output. The rendering mechanism of a control can be altered by overriding this Render method.
The final stage of the life cycle is the Unload event. This is called just before the page object is dismissed. In this event you can release critical resources such as database connections, files, graphical objects and so on. After this, the browser receives the HTTP response and displays the page.
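A minimal code-behind sketch of hooking a few of these stages (the class name is hypothetical; the overrides are the standard Page/Control virtuals):

using System;
using System.Web.UI;

public partial class DemoPage : Page
{
    protected override void OnInit(EventArgs e)          // initialization: no view state yet
    {
        base.OnInit(e);
    }

    protected void Page_Load(object sender, EventArgs e) // load: view state and posted data restored
    {
        if (!IsPostBack)
        {
            // one-time setup for the first request
        }
    }

    protected override void OnPreRender(EventArgs e)     // last chance to change state before it is saved
    {
        base.OnPreRender(e);
    }

    protected override void OnUnload(EventArgs e)        // release connections, files, etc.
    {
        base.OnUnload(e);
    }
}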
Hope this gives a much required insight into the sequence of events during the page life cycle. Enjoy coding...!!