Thursday, September 23, 2010

Exception handling in .Net


In the Win32 API, as well as in COM, whenever an exceptional condition occurred during the execution of code, no explicit notification was sent to the caller about the problem. Instead it was left to the caller to check whether the call succeeded, e.g. most Win32 API functions return FALSE to indicate that something went wrong, and the caller then needs to call GetLastError to find the details of the problem. Similarly in COM, if the high bit of an HRESULT is 1, the remaining bits give you the details of the failure. Thus it is up to the caller to make explicit checks for failures, and if the caller forgets to do so, the state of the application can become indeterminate down the line.

.Net has changed this. If any assumption is violated in a method, it throws an appropriate Exception which needs to be caught either in the calling method OR somewhere up the call stack. If there is no handler for the Exception, the CLR terminates the application rather than letting it continue with unpredictable results down the line.
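To make the contrast concrete, here is a minimal sketch (the TryParsePort/ParsePort names are illustrative, not from any API): the first style relies on the caller remembering to check a return value, while the second makes a violated assumption impossible to ignore.

```csharp
using System;

class ErrorStyles
{
    // Win32/COM style: success is signalled by the return value,
    // and the caller must remember to inspect it.
    public static bool TryParsePort(string text, out int port)
    {
        return int.TryParse(text, out port);
    }

    // .Net style: a violated assumption surfaces as an exception
    // that either gets handled or terminates the application.
    public static int ParsePort(string text)
    {
        return int.Parse(text); // throws FormatException on bad input
    }

    static void Main()
    {
        int port;
        if (!TryParsePort("abc", out port))
            Console.WriteLine("caller checked and found a failure");

        try { ParsePort("abc"); }
        catch (FormatException) { Console.WriteLine("caller was notified"); }
    }
}
```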

Define Exception

How do you define an Exception? The most common answer would be that an Exception is an error, or a response to an error. That means if we call a method and the method throws an Exception, something went wrong in the execution of the method. To exemplify, let us say we have a method:

public int Divide(int a, int b)
{
    return a / b;
}

Now if the caller of this method passes the value zero for b, the method throws a DivideByZeroException. Is this error due to some execution fault in the method, or due to an incorrect value passed to it? Likewise, the CLR too throws some exceptions, like OutOfMemoryException and StackOverflowException, which are not due to errors in code.

Now how would you define an Exception? I found a definition by Jeffrey Richter very appealing: "Exception is defined as non-conformance to an assumption implied by the programmatic interface". In the above method, Divide(), the interface assumes b to be non-zero, and when the caller violated that assumption, the appropriate exception was thrown back to the caller.
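A sketch of how Divide() could make its implied assumption explicit (the guard and its message are illustrative):

```csharp
using System;

class Calculator
{
    // The interface's implicit assumption (b != 0) made explicit:
    // violating it produces an exception, never a silent bad result.
    public int Divide(int a, int b)
    {
        if (b == 0)
            throw new ArgumentOutOfRangeException("b", "b must be non-zero");
        return a / b;
    }
}
```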

Exception handling in .Net

Exception handling in .Net is done via try...catch...finally blocks of code: place the code in which you anticipate an exception in the try block, and catch those exceptions in catch blocks to take whatever action you wish on them.

The finally block contains code (mostly some sort of clean-up code) that is executed irrespective of whether an exception occurs in the try block or not.

public void SomeMethod()
{
    FileStream fs = null;

    try
    {
        fs = new FileStream(pathname, FileMode.Open);
    }
    catch (FileLoadException fle)
    {
        // put the code here to handle FileLoadException or any exception derived from FileLoadException
    }
    catch (FileNotFoundException fnfe)
    {
        // put the code here to handle FileNotFoundException or any exception derived from FileNotFoundException
    }
    catch (IOException ioe)
    {
        // put the code here to handle IOException or any exception derived from IOException
    }
    catch (Exception e)
    {
        // put the code here to handle any CLS compliant exception
    }
    catch
    {
        // put the code here to handle any exception, whether CLS compliant or not
    }
    finally
    {
        // clean-up code, e.g. close fs if it was opened
    }
}


The above shows a method which has a try block followed by a few catch blocks and one finally block. Any exception that originates in the try block is matched against the catch blocks, looking for the stated exception or any of its derivatives. If the first catch block does not match, the next one is tried, and so on. If none of the catch blocks caters to the thrown exception, the exception is passed to the next method up the call stack and the same procedure is repeated.

If a catch block of some method in the call stack can handle the thrown exception, then all the finally blocks between the point where the exception was thrown and the matching catch block are executed first; then the code in the matching catch block executes; and then the code in the finally block corresponding to the catch block which handled the exception is executed.

If the code in the catch block does not throw/rethrow an exception, and no exception occurs in the finally block either, execution falls through to the code immediately after the finally block (or after the catch blocks, if there is no finally block). Also, if an exception is thrown while executing code in the finally block, it is treated as if it were thrown at the end of the finally block.

The last catch block does not specify any exception. It is meant for catching any exception not catered for by the catch blocks above it, not even by the catch (Exception e) block, which catches any CLS compliant exception. Generally this catch block is meant for catching non-CLS compliant exceptions, though inside it there is no way of knowing what the exception was.

To know how exception handling impacts the performance of your code, you can use PerfMon.exe or the System Monitor ActiveX control that comes with Windows NT 4, Windows 2000, Windows XP, and the Windows .NET Server product family. Various exception-related counters get installed when the .Net Framework is installed.

Hierarchy of Catch Blocks

Catch blocks with specific exceptions need to come first, followed by the catch blocks with generic exceptions. The reason is that if the catch (Exception e) block were the first one then, since every exception is derived from the Exception class, this block alone would catch all CLS compliant exceptions and none of the catch blocks below it would ever execute.
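A small sketch of the ordering rule (the Classify helper is illustrative); note that the C# compiler actually rejects the reversed order with error CS0160:

```csharp
using System;
using System.IO;

class CatchOrder
{
    public static string Classify(Exception toThrow)
    {
        try
        {
            throw toThrow;
        }
        catch (FileNotFoundException) { return "file not found"; } // most specific first
        catch (IOException)           { return "I/O problem"; }    // its base class next
        catch (Exception)             { return "something else"; } // most generic last
        // Putting catch (Exception) first would not even compile:
        // error CS0160, a previous clause already catches the type.
    }

    static void Main()
    {
        Console.WriteLine(Classify(new FileNotFoundException()));
        Console.WriteLine(Classify(new IOException()));
    }
}
```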

Should all methods have Exception Handling?

Unfortunately it is a common practice to have catch blocks at the end of most, if not all, methods.
Not only is this detrimental to performance, it is also grossly incorrect and conceals the truth.

Exception handling should be put only in places where:

1. The exception can be handled by the code and efforts can be made to work around the problem.

2. The exception needs to be wrapped into a more meaningful one and then re-thrown. For example, say a type offers its users a facility for finding phone numbers. If the phone numbers are maintained in files and a file-related exception occurs, it would not be prudent to throw the file exception back to the user, as it may not make sense to them. Better to wrap such exceptions in a more meaningful custom exception and re-throw that.

3. If a message is to be displayed to the user on occurrence of the Exception.
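The wrapping scenario in point 2 could be sketched like this (PhoneDirectoryException, the lookup method, and the phonebook.dat file name are all hypothetical):

```csharp
using System;
using System.IO;

class PhoneDirectoryException : Exception
{
    public PhoneDirectoryException(string message, Exception inner)
        : base(message, inner) { }
}

class PhoneDirectory
{
    // Hypothetical lookup that keeps numbers in a file: a low-level file
    // error is wrapped in a domain-meaningful exception, while the
    // original is preserved as InnerException for diagnostics.
    public string FindNumber(string name)
    {
        try
        {
            using (var reader = new StreamReader("phonebook.dat")) // assumed data file
                return reader.ReadLine();
        }
        catch (IOException ioe)
        {
            throw new PhoneDirectoryException(
                "Phone number lookup failed for " + name, ioe);
        }
    }
}
```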

Just catching an exception without any purpose and swallowing it not only hurts performance but also leaves the application in an unpredictable state, as some assumption/condition failed and nothing has been done about it.

Also note that not all exceptions can be handled by the application. If the CLR runs out of memory for its internal purposes, it will display a message on the console and just terminate the application; none of the handlers will be called. Similarly, if a StackOverflowException occurs in the internals of the CLR, the exception cannot be caught by code, none of the finally blocks are executed, and the process is killed. But if the StackOverflowException occurs in application code, the application can catch it, though the code in the finally blocks will not execute, as there is no space left on the stack to execute them.

StackTrace in Exceptions

The Exception class, from which all exception classes inherit, has a public read-only property called StackTrace. Accessing this property actually executes code in the CLR; the property doesn't simply return a string. If you create a custom exception derived from Exception, construct it, and read this property before the exception is thrown, you will get null. When an exception is thrown, the CLR records the point where it was thrown, and when a catch filter accepts the thrown exception, the CLR records where it was caught. If the exception's StackTrace property is then accessed inside the catch block, the code implementing the property calls into the CLR, which uses the recorded start and end points to build a string listing all the methods between the place where the exception was thrown and the place where it was caught.

If an exception is thrown again with throw e; the exception's StackTrace is reset. But if the exception is re-thrown with just throw; the StackTrace property is not reset.
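A sketch demonstrating the difference (the helper names are made up; NoInlining keeps the throwing frame from being merged away by the JIT compiler):

```csharp
using System;
using System.Runtime.CompilerServices;

class RethrowDemo
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void Thrower() { throw new InvalidOperationException("boom"); }

    public static bool TraceContainsOrigin(bool rethrow)
    {
        try
        {
            try { Thrower(); }
            catch (InvalidOperationException e)
            {
                if (rethrow) throw;   // preserves the original StackTrace
                else throw e;         // resets it to this line
            }
        }
        catch (InvalidOperationException e)
        {
            return e.StackTrace.Contains("Thrower()");
        }
        return false;
    }

    static void Main()
    {
        Console.WriteLine(TraceContainsOrigin(true));  // throw;   keeps the frame
        Console.WriteLine(TraceContainsOrigin(false)); // throw e; discards it
    }
}
```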

The StackTrace property only includes method names up to the point where the exception is caught by the catch filter; the methods above the catch point in the call stack are not included. To include those methods, use the static System.Environment.StackTrace property and merge the two strings.

Sometimes not all the methods of the call stack appear in the StackTrace property. This is because the JIT compiler may inline some of the methods to avoid the overhead of calling and returning from a separate method. Many compilers offer a /debug command-line switch which, when turned on, makes the compiler embed information in the assembly telling the JIT compiler not to inline methods, so that stack traces are more complete and useful to the developer debugging the application.

Applying the attribute System.Runtime.CompilerServices.MethodImplAttribute (with MethodImplOptions.NoInlining) to a method forbids the JIT compiler from inlining that method in both debug and release builds.
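For instance (the method itself is just a placeholder):

```csharp
using System.Runtime.CompilerServices;

class StackTraceFriendly
{
    // Forbids the JIT compiler from inlining this method, so its frame
    // always shows up in stack traces, in debug and release builds alike.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static int Helper(int x)
    {
        return x * 2;
    }
}
```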

Exception Hierarchy and Custom Exceptions

All CLS compliant exceptions inherit from the class Exception. Initially Microsoft advocated the strategy that all system exceptions would inherit from the class SystemException while all application exceptions would inherit from ApplicationException, with both of these classes inheriting from the base class Exception.
But in the course of building the FCL (Framework Class Library) Microsoft violated its own strategy: some reflection-related exception types derive from ApplicationException instead of SystemException.

Checking against SystemException or ApplicationException may therefore not be very practical. At the same time, checking for individual exceptions may be impractical too. This is where the exception hierarchy kicks in and may be more useful. For example, ArgumentNullException, ArgumentOutOfRangeException, and DuplicateWaitObjectException all inherit from ArgumentException.

Checking against ArgumentException rather than each of the derived exceptions may be more helpful.

To define a custom exception you either inherit from the Exception base class OR take any of the existing exception classes in the hierarchy and derive the custom exception from it. Whether to choose the Exception class or one of the existing classes as base depends entirely upon the policy and decisions of the exception design of the application. If you derive a new exception from ArgumentException, then all the places in the code where ArgumentException is being caught may need to consider handling this new exception type, unless you have designed/coded with this in mind.

The Exception class has 3 constructors: a blank one; one accepting a string (which sets the description); and one accepting a string and an inner exception. In cases where a thrown exception is wrapped by a new or custom exception, the inner exception of the new exception is set to the originally thrown exception.

A custom exception can have its own data fields too, besides the ones provided by the Exception class. But the caveat to watch here is that if the exception needs to be serialized, serialization code for such additional fields needs to be written. To provide serialization facilities for such custom exception classes, annotate the class with the [Serializable] attribute, implement the ISerializable interface by overriding GetObjectData, and provide the deserialization constructor.
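A sketch of such a serializable custom exception (the DiskFullException name and DrivePath field are made up for illustration):

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
class DiskFullException : Exception
{
    // Extra data field beyond what Exception already carries.
    public string DrivePath { get; private set; }

    public DiskFullException(string message, string drivePath)
        : base(message)
    {
        DrivePath = drivePath;
    }

    // Deserialization constructor: restore the extra field.
    protected DiskFullException(SerializationInfo info, StreamingContext context)
        : base(info, context)
    {
        DrivePath = info.GetString("DrivePath");
    }

    // Serialization: persist the extra field alongside the base data.
    public override void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        base.GetObjectData(info, context);
        info.AddValue("DrivePath", DrivePath);
    }
}
```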

Unhandled Exceptions (AppDomain)

Exceptions which are not handled by a method propagate all the way up the call stack. These exceptions may be CLS compliant, i.e. derived from the Exception class, or non-CLS compliant. They can be handled by attaching an event handler to the AppDomain as shown below:

AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(UnhandledExceptionCallbackMethod);

This callback receives a System.UnhandledExceptionEventArgs object which has two public read-only properties: ExceptionObject (of type System.Object) and IsTerminating (of type System.Boolean). Check the IsTerminating property to know whether the CLR is going to kill the AppDomain.

Normally, for manual threads, pool threads, and the finalizer thread, the CLR swallows any unhandled exceptions and either kills the thread, returns the thread to the pool, or moves on to call the Finalize method of the next object. If an unhandled exception occurs in any of these kinds of threads, the IsTerminating property will be false.

But if an application’s main thread or an unmanaged thread has an unhandled exception, IsTerminating will be true.
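A minimal sketch of registering the callback (the handler body is illustrative; note the process still terminates afterwards when IsTerminating is true):

```csharp
using System;

class UnhandledDemo
{
    static void Main()
    {
        AppDomain.CurrentDomain.UnhandledException += OnUnhandled;

        // No catch block anywhere on the call stack:
        throw new InvalidOperationException("nobody handles me");
    }

    public static void OnUnhandled(object sender, UnhandledExceptionEventArgs e)
    {
        // Last-chance logging; the CLR will still kill the process.
        Console.WriteLine("Unhandled: " + ((Exception)e.ExceptionObject).Message);
        Console.WriteLine("Terminating: " + e.IsTerminating);
    }
}
```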

There is also a registry entry whose value influences the handling of these unhandled exceptions. The registry entry is
HKEY_LOCAL_MACHINE\Software\Microsoft\.NETFramework\DbgJITDebugLaunchSetting.

Value 0: Displays a dialog box asking the user whether he would like to debug the process.

Value 1: No dialog box is displayed to the user and CLR fires the AppDomain’s UnhandledException event.

Value 2: No dialog box is displayed to the user and AppDomain’s UnhandledException event doesn’t fire. The CLR just spawns the debugger attaching it to the application.

Unhandled Exception (Winforms)

To handle unhandled exceptions in WinForms, define a method that matches the System.Threading.ThreadExceptionEventHandler delegate and register it with the Application type's static ThreadException event.

Windows Forms deals only with CLS-compliant exceptions; non-CLS-compliant exceptions continue to propagate outside the thread’s message loop and up the call stack. To display or log both CLS-compliant and non-CLS-compliant exceptions, define two callback methods and register one with the Application type’s ThreadException event and register the other with AppDomain type’s UnhandledException event.

Unhandled Exceptions (ASP.Net)

Unhandled exceptions in ASP.Net can be handled for a particular Web page or for all Web pages.

To register a callback method that will receive notifications of unhandled exceptions on a particular Web page, register the callback method using the Error event offered by the System.Web.UI.TemplateControl class; this class is the base class of the System.Web.UI.Page and System.Web.UI.UserControl classes.

To register a callback method that will receive notifications of unhandled exceptions on any page, register the application-wide callback method using the Error event offered by the System.Web.HttpApplication class in the Global.asax file.

Unhandled Exceptions (WebServices)

When an unhandled exception occurs in an XML Web service, ASP.NET catches the exception and throws a new SoapException object (from System.Web.Services.Protocols). A SoapException object is serialized into XML representing a SOAP fault. This SOAP fault XML can be parsed and understood by any machine acting as an XML Web service client, which allows for XML Web service client/server interoperability.

Tuesday, September 21, 2010

Automatic Memory Management - Garbage Collection


Memory management essentially requires that objects be cleaned up when their usage is complete. And obviously, freed memory should not be referenced afterwards.

But this seemingly simple task has been a major source of programming errors. Programmers either forget to free memory when it is no longer needed OR use memory that has already been freed. Numerous tools have been designed to help programmers deal with these kinds of issues. All of this, though helpful, still requires additional effort rather than letting developers focus on the real problem. This is where Garbage Collection comes into play: it totally frees programmers from worrying about releasing memory.

Application programmers create objects/types and use them in their programs. When an object goes out of scope, that is, when it is no longer reachable by the application, it will automatically be collected and recycled by the Garbage Collector at an appropriate time.

So why is there no Garbage Collector for C++?

Because in C++ an object pointer can be cast to any other type, it is impossible in general to determine what object a pointer is actually pointing to. And if the object that a pointer points to cannot be determined, how can garbage collection be done in a C++ environment!

Working of Garbage Collector:

The CLR (Common Language Runtime) mandates that objects be allocated from a heap, called the managed heap. (Value types such as int are allocated on the stack; reference types, including string, live on the heap.) The managed heap maintains a distinct pointer, NextPtr, from where the next object is allocated; after each allocation NextPtr is advanced by the number of bytes of the newly allocated object. This mechanism ensures that consecutively allocated objects are contiguous in memory, which gives performance gains due to locality of reference.

In C, to allocate an object, a linked list (the free list) needs to be traversed to find an appropriately sized memory chunk, allocate it, and make the necessary changes in the linked list. Not only does this require additional time, but there is no guarantee that consecutively allocated objects will be contiguous in memory.

Thus the managed heap is superior to the C-runtime heap in the ways described above.

But the managed heap needs to reclaim memory, as it cannot just keep allocating infinitely. In order to reclaim memory, the garbage collector needs to keep track of all objects on the heap that are no longer reachable, i.e. no longer being used by the application, and then reclaim their memory. Keeping track of the reachability of objects is done via a mechanism called roots: the JIT compiler maintains, for each method offset, the list of object references in use at that point. With the help of these method roots, as well as global and static variables, the GC can walk the stack of each thread and find the set of all reachable objects.

For the Garbage Collector (GC) to start, there needs to be some kind of threshold to trigger it. Let's say at a particular point the allocated memory on the managed heap reaches such a threshold and starts the GC. The GC looks at the objects in the heap (not all objects, but a subset based on generations, which we will talk about later) and identifies the objects that are no longer in use by the application. (It determines this by looking at the roots for all the methods on the stack to find which objects are reachable and which are not.) The objects that are not in use are garbage collected, i.e. their memory is reclaimed; and if there is a need to compact the heap, a decision the GC takes based on fragmentation of memory, it compacts the heap by removing the unreachable objects and shifting the remaining objects down to make the heap contiguous. If objects are moved, the GC of course corrects all references to them so that they point correctly.


If the garbage collector had to scan the entire heap, i.e. all the allocated objects, it would be very time consuming and inefficient. Hence the garbage collector works on the assumption that newly allocated objects have shorter lifetimes than older objects on the heap. Numerous studies have been done to validate this assumption.

When an application starts, it is allocated a managed heap. Objects are allocated memory on this heap by the application thread(s). Up to this point, all the objects on the heap are termed Generation 0. When the memory threshold defined for Generation 0 is reached (say, when Generation 0 reaches a size of 512 KB), the GC starts scanning the objects in the managed heap to check whether they are reachable by the application. All the objects that are reachable are left as they are, and those that are not reachable are cleaned off the managed heap by the GC. If any compaction needs to be done, the GC makes that call and moves the objects in the heap so that they are contiguous, with NextPtr pointing just past the last object in the heap. The GC takes care to fix up object references wherever required if objects have been moved.

Now the objects which were not garbage collected become Generation 1. Henceforth, all the objects that are allocated memory on the heap form part of Generation 0, which is currently empty. Again, when the threshold on Generation 0 is reached, the GC will kick in to do the cleaning activity for the managed heap.

What about the objects in Generation 1? Well, there is a threshold for Generation 1 too: let us say that when Generation 1 reaches a size of 1 MB, which may be the defined threshold for it, the GC will kick off, looking into Generation 1 to find unreachable objects. The objects which survive a Generation 1 collection are promoted to Generation 2, and likewise the surviving objects from Generation 0 become part of Generation 1.

That is the generational theory of GC: objects in Generation 0, i.e. the newest objects, are always scanned first for garbage collection, then Generation 1, and last of all Generation 2. There is no generation beyond Generation 2.
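The promotion behaviour can be observed with GC.GetGeneration (the generation numbers after a forced collection are typical, not guaranteed):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        object obj = new object();
        Console.WriteLine(GC.GetGeneration(obj)); // freshly allocated: generation 0

        GC.Collect();                             // survivors get promoted
        Console.WriteLine(GC.GetGeneration(obj)); // typically 1 now

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(obj)); // typically 2, the last generation
    }
}
```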

Generations and Thread Hijacking and Suspension

Before the GC can run, it needs to suspend all the threads executing managed code until it completes its activities and fixes up any object references affected by the movement of objects. After the GC completes, the threads resume from the point where they were suspended. But the GC does not suspend a thread just anywhere. It checks whether the thread's instruction pointer is at one of the safe-point offsets recorded in the method tables produced when the IL code was JIT compiled. If yes, the thread can be easily suspended, as it is at a safe point.

If not, the thread is hijacked: the return address on the stack is modified to point to a function implemented inside the CLR. The thread is then resumed in the hope that when the currently executing method returns, it will execute this special function inside the CLR and thus suspend itself. But the thread may not return for quite some time. The CLR waits 250 milliseconds, and if the method has not finished (and entered the special CLR method), the thread is suspended and hijacked again, i.e. the stack is modified so that the current method's return address points into the special CLR suspension method. The thread then resumes until it either enters the CLR suspension method OR another 250 milliseconds expire. The process continues until all threads are suspended, and then the GC kicks in.


In a multiprocessor environment, the managed heap is partitioned into multiple memory arenas, one per thread, so that the threads do not all need exclusive access to a single managed heap. The server version of the execution engine (MSCorSvr.dll), in a multiprocessor environment, runs garbage collection with one thread per CPU, thus allowing parallel garbage collection of the memory arenas.

Concurrent collections

In a multiprocessor environment, a concurrent garbage collector thread of normal priority can be initiated, which works in the background while the application threads run and builds the graph of unreachable objects. Thus when the garbage collection actually takes place, since the graph of unreachable objects is already built, the GC pause takes less time. For interactive GUI applications, concurrent collection may be a good option.

The concurrent option needs to be turned ON in the application's configuration file via the gcConcurrent element.
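A sketch of the relevant configuration fragment (in the application's .config file):

```xml
<configuration>
  <runtime>
    <!-- enable the concurrent garbage collector -->
    <gcConcurrent enabled="true"/>
  </runtime>
</configuration>
```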

Large Objects

Objects larger than 85,000 bytes are allocated in a special part of the managed heap (the large object heap) and are treated as Generation 2, simply because these objects are heavy, and moving and compacting them each time they are garbage collected would waste too much time. So if such heavy objects are short-lived and created frequently, they may cause Generation 2 to be collected more frequently, and this hurts performance.

In case your application does have large objects that are collected frequently, see whether they can be broken up into, or composed of, smaller objects such that only a few of them need to be collected frequently, OR try to circumvent the situation of having to collect these large objects frequently.
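The threshold can be observed directly: an array just over 85,000 bytes reports generation 2 immediately after allocation (a sketch; the exact threshold is an implementation detail of the CLR):

```csharp
using System;

class LargeObjectDemo
{
    static void Main()
    {
        byte[] small = new byte[1000];   // ordinary heap: starts in generation 0
        byte[] large = new byte[100000]; // > 85,000 bytes: large object heap

        Console.WriteLine(GC.GetGeneration(small)); // 0
        Console.WriteLine(GC.GetGeneration(large)); // 2
    }
}
```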

Finalize and Dispose

Most object types only manipulate bytes in memory, and hence their memory is fully reclaimed when they are garbage collected. So why do we need a Finalize method on some object types, and what does it do?

The Finalize method is called when the object is actually garbage collected by the garbage collector. It is required when the resources held by the type cannot be reclaimed by the garbage collector itself, i.e. when the object uses an unmanaged resource like a file handle, a mutex kernel object, etc. For such types, the Finalize method executes code to free the unmanaged resources, which cannot be reclaimed by the GC.

The usual and good practice is to encapsulate these unmanaged resources within a managed type; for example, the FileStream type encapsulates the Windows file handle and exposes methods to do various operations on the unmanaged resource, like Create, Read, Write, etc. When a FileStream object becomes unreachable by the application, it is garbage collected and its Finalize method is automatically called by the GC. The code in the Finalize method then does the needful to release the unmanaged resource.

Exactly when the Finalize method is called is not deterministic, as it depends on when the GC kicks in. So if we need to release the unmanaged resource deterministically, we can do so by implementing the Dispose pattern. The Dispose method is a public method. It first suppresses the GC's call to Finalize and then calls a Boolean Dispose method, which is protected and virtual. That method synchronizes thread access by placing its code inside a lock (...) { ... } construct, and it releases/closes the unmanaged resource.

The Boolean Dispose method is called from the Dispose method as well as from the Finalize method, with the Boolean value indicating whether the caller was Dispose or Finalize. Why? Because if the Boolean Dispose method is called from Finalize, it is not advisable to access any other managed objects (there is no order in which Finalize methods are called by the GC, i.e. an inner/contained object's Finalize method may run earlier than the containing object's).

When the Boolean Dispose method is called from the Dispose method, it is free to touch other managed objects, and this is indicated by passing true. The code below will clarify more.


public void Dispose()
{
    GC.SuppressFinalize(this); // Finalize no longer needs to run
    Dispose(true);
}

protected virtual void Dispose(bool disposing)
{
    lock (this)
    {
        if (disposing)
        {
            // ... called from Dispose: can access other managed objects ...
        }

        if (handle != InvalidHandle)
        {
            Close(handle);
            handle = InvalidHandle;
        }
    }
}

~TypeName() // the finalizer calls the Boolean Dispose method with false
{
    Dispose(false);
}


If a new type derives from this type which implements the Dispose pattern, all it needs to do is override the Boolean Dispose method, provide its own clean-up implementation, and finally call the base class's Boolean Dispose method. And if this new derived type does not need to do any clean-up, it need not override the Boolean Dispose method at all.

Finalization method and GC

For a type that implements Finalize, as soon as an instance is allocated on the heap, the GC notes that this type has a Finalize method and stores a reference to the object. When the object becomes unreachable, the GC places a reference to it in the freachable queue. At this point there is a reference to the object again, so technically it cannot be garbage collected. The CLR uses a separate thread, other than the application threads, to run the Finalize methods of the objects in the freachable queue. As each Finalize method runs, the entry for that object is taken off the queue, making it completely unreachable so that it can be garbage collected by the GC in its next run. (Please note that the order in which the Finalize methods of the objects run is not deterministic, and hence any use of managed objects within Finalize methods could lead the application into an indeterminate state.)

Thus an object having a Finalize method requires two GC passes to be garbage collected. Hence, unless the type is using an unmanaged resource, it is strongly recommended not to implement a Finalize method.

Also, a Dispose method allows clean-up to happen before the Finalize method would be called. This implies that the type will not be used any more down the line, something which is impossible to enforce. Hence it is advisable not to implement the Dispose pattern unless needed; when the object becomes unreachable, the GC will automatically call the Finalize method and the desired clean-up will be done. But if the Dispose pattern does need to be implemented for some reason, then before using the object it should be checked whether it is still alive or has already been cleaned up.


Weak References

In cases where large objects are allocated but used sparingly, then instead of holding a strong reference to the object, a weak reference can be held. A weak reference allows the object to be garbage collected if there is a need for memory. Otherwise the weak reference stays intact and the code, if required, can still use the object through it.

The code below demonstrates how to hold a weak reference. But please bear in mind that for the weak reference to work, there should be no strong reference to the object anywhere on the stack; otherwise the weak reference has no effect.

WeakReference wkRef;

protected void CreateWeakReference()
{
    StrongRefObject strngObj = new StrongRefObject();
    wkRef = new WeakReference(strngObj);
    strngObj = null; // remove the strong reference to the object
}

protected void RestoreStrongReference()
{
    StrongRefObject strngObj = (StrongRefObject)wkRef.Target;
    if (strngObj == null)
    {
        // the object has been garbage collected
    }
    else
    {
        // the object has not been garbage collected and can be used via strngObj
    }
}


Resurrection and Object Pooling

Resurrection means bringing back to life. When an object's Finalize method runs, the object is dead: there are no references to it and its memory will be reclaimed in the next GC cycle. But what if the object, in its Finalize method, hooks itself onto some global static variable? In that case the object cannot be garbage collected, as it is referenced by the application again. It is back to life: resurrection. You can also register with the GC to run the Finalize method again, if the object is garbage collected once more, with the following call: GC.ReRegisterForFinalize(this);

When would you want to implement Resurrection?

Object pooling. If there is a class of objects that you would like to pool, you can use resurrection. First, create a pool of that object type. When an object from the pool is handed out, its reference is removed from the pool structure holding the collection. When the object's Finalize method runs, it cleans the object up, reattaches it to the pool structure, and re-registers its Finalize method with the GC. Thus the object is cleaned and available for reuse.
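A sketch of pooling via resurrection along the lines above (PooledResource is a made-up type; production pooling would normally avoid finalizers altogether):

```csharp
using System;
using System.Collections.Generic;

class PooledResource
{
    static readonly List<PooledResource> Pool = new List<PooledResource>();

    public static PooledResource Acquire()
    {
        lock (Pool)
        {
            if (Pool.Count > 0)
            {
                PooledResource r = Pool[Pool.Count - 1];
                Pool.RemoveAt(Pool.Count - 1); // hand it out: drop the pool's reference
                return r;
            }
        }
        return new PooledResource();
    }

    ~PooledResource()
    {
        // Resurrection: instead of dying, re-attach to a static root...
        lock (Pool) { Pool.Add(this); }
        // ...and ask the GC to run this finalizer again next time around.
        GC.ReRegisterForFinalize(this);
    }
}
```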

Programmatic Control of Garbage Collector

You can force garbage collection by calling the static methods below, though it is best to let the GC run of its own accord, as it also fine-tunes the generation threshold values/triggers based on the application's behavior.


GC.Collect(2); // collecting generation 2 also collects generations 1 and 0

GC.WaitForPendingFinalizers(); // suspends the calling thread until the freachable queue is empty

int generation = GC.GetGeneration(obj); // returns the generation the object is currently in

Monitor Garbage Collection

The .Net Framework installs a number of performance counters that give real-time statistics about the CLR’s operations. These statistics can be viewed via PerfMon.exe, where the various GC-related counters can be monitored.


The above should provide some insight into memory management and garbage collection in .Net: the Finalize and Dispose methods and their impact on performance; where large objects are allocated on the heap and how that affects performance; and how the GC uses multiple processors and performs concurrent collection in a multiprocessor environment.


1. Applied Microsoft .Net Framework Programming – Jeffrey Richter; this is by far the best resource I have come across on this subject.

2. CLR via C#, 2nd Edition – Jeffrey Richter. (I have not read this book but am including it because I have every reason to believe it is a good read and covers this topic in more detail.)

Thursday, August 26, 2010

Isolation levels in Transactions

Recently I was revisiting my understanding of isolation levels in transactions, a topic I had read about a long time back and which had started to evaporate from my memory.

I read a couple of very interesting articles on the subject (references below) and have summarized my understanding in this article.

Need for Isolation levels in transactions

A transaction is a series of actions that need to be executed in their entirety, and the obvious reason for this is data consistency.

Every transaction acquires some resources to do work on them. Depending on what the transaction’s actions need, the resources can be acquired exclusively, blocking any other transaction that needs the same resources, or in shared mode, in which case other transactions can also use the same resources, increasing concurrency but potentially risking data (resource) consistency.

Isolation Levels

Traditionally there are four isolation levels, but since SQL Server 2005 two more have been added, which are more optimistic in nature (the traditional four are pessimistic).

Read Uncommitted

Under this isolation level, a transaction reads data that may have been modified by another transaction but not yet committed. This can lead to dirty reads if the transaction modifying the data rolls back the update.

I am not sure when exactly this isolation level is used in practice; a typical case is rough reporting queries, where blocking must be avoided and approximate results are acceptable.

Read Committed

This is the default isolation level for all transactions in SQL Server (at least up to 2005).

This isolation level ensures that there are no dirty reads: if a record is being updated by another transaction, a transaction running at Read Committed is blocked until the other transaction commits (or rolls back).

The advantages of this isolation level are that there are no dirty reads and concurrency is high, since the shared lock is released as soon as the data is read, letting other transactions use the same resource.

The disadvantage is that because only shared locks are taken while reading, and they are released as soon as the read completes, updates can be lost: Tx1 can read a value and update it some time later, and if Tx2 reads the same value in the meantime and also updates it, only the last update is retained; the others are lost.
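In .Net, the isolation level can be specified when starting a transaction. A minimal ADO.NET sketch; the connection string placeholder and the Accounts table are hypothetical:

```csharp
using System.Data;
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection("<connection string>"))
{
    conn.Open();
    // Start the transaction at an explicit isolation level
    using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT Balance FROM Accounts WHERE Id = 1", conn, tx);
        object balance = cmd.ExecuteScalar();
        // ... work with the value, issue updates on the same transaction ...
        tx.Commit();
    }
}
```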

Repeatable Read

We can solve the problem of lost updates and make our reads repeatable with the Repeatable Read isolation level.

This Isolation level ensures that the data read in a transaction with this Isolation level cannot be changed until the transaction is complete.

Voila! It seems like all issues are resolved... not really. Updates are blocked on the data being read by this transaction, but inserts matching the criteria of the data being used in the transaction can still happen. These are called phantom reads.


Serializable

This is the strictest of all the isolation levels: phantom reads cannot occur either, since no inserts matching the criteria of the data being used by a transaction at this level are allowed.

Yes, all the issues in the three levels above are resolved and consistency is fully ensured, but at the price of very low concurrency.

Read Committed Snapshot (Since SQL Server 2005)

This and the one below are the two new isolation levels introduced in SQL Server 2005. They are more optimistic in nature, aiming to maximize concurrency along with data consistency, and both use a technique called row versioning.

The Read Committed Snapshot isolation level allows a transaction to read the last committed value of a record that another transaction (Txn1) has updated but not yet committed, instead of blocking. Once Txn1 commits, a subsequent read by Txn2 sees the updated values.

Note that Read Committed Snapshot is not chosen per transaction: it is enabled through the READ_COMMITTED_SNAPSHOT database option, which changes the behavior of the default Read Committed level to use row versioning.

Snapshot (Since SQL Server 2005)

This isolation level is the same as Read Committed Snapshot, except that Txn2 sees the snapshot of the data taken when its transaction started throughout its life, even if Txn1 updates and commits the data midway.

Even inserts matching the criteria of the data being read in Txn2 will never be visible to Txn2.

This isolation level also prevents lost updates: if Txn2 reads the original value while Txn1 is updating it, and Txn2 later tries to update values that Txn1 has since modified, an error is thrown to Txn2.
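From .Net, this conflict surfaces as a SqlException with error number 3960 ("Snapshot isolation transaction aborted due to update conflict"). A sketch of handling it, assuming the database has ALLOW_SNAPSHOT_ISOLATION enabled and again using a hypothetical Accounts table:

```csharp
using System.Data;
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection("<connection string>"))
{
    conn.Open();
    SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Snapshot);
    try
    {
        SqlCommand cmd = new SqlCommand(
            "UPDATE Accounts SET Balance = Balance - 10 WHERE Id = 1", conn, tx);
        cmd.ExecuteNonQuery();
        tx.Commit();
    }
    catch (SqlException ex)
    {
        tx.Rollback();
        if (ex.Number == 3960)
        {
            // Update conflict: another transaction modified the row after
            // our snapshot was taken. Typically the transaction is retried
            // from the start.
        }
        else
        {
            throw;
        }
    }
}
```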


In order to operate at the desired isolation levels, transactions need to acquire certain locks. Below are the main types of locks that are generally acquired:

Shared: this type of lock is generally acquired during reads. Multiple shared locks can be held by different transactions at the same time, and while they are held, a request for an exclusive lock has to wait until all shared locks are released.

Exclusive: this lock is generally used during updates; no other locks are allowed on data holding this lock until it is released.

Intent: this lock signals an intent to acquire an exclusive lock on data that already has a shared lock, preventing further conflicting locks from being granted until the exclusive lock is obtained.

Schema: this lock is used by the database engine while compiling queries, to ensure the schema is not modified while it is held.
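The locks currently held can be inspected through the sys.dm_tran_locks view (SQL Server 2005 and later). A sketch of querying it from .Net, with the connection string again a placeholder:

```csharp
using System;
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection("<connection string>"))
{
    conn.Open();
    SqlCommand cmd = new SqlCommand(
        "SELECT resource_type, request_mode, request_status FROM sys.dm_tran_locks",
        conn);
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // e.g. "OBJECT  S  GRANT" for a shared lock,
            //      "KEY  X  GRANT" for an exclusive key lock
            Console.WriteLine("{0}  {1}  {2}", reader[0], reader[1], reader[2]);
        }
    }
}
```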

The table below summarizes the behavior of each isolation level:

| Isolation level | Dirty reads | Nonrepeatable reads | Phantom reads | Concurrency model | Conflict detection |
|---|---|---|---|---|---|
| Read Uncommitted | Yes | Yes | Yes | Pessimistic | No |
| Read Committed | No | Yes | Yes | Pessimistic | No |
| Repeatable Read | No | No | Yes | Pessimistic | No |
| Serializable | No | No | No | Pessimistic | No |
| Read Committed Snapshot | No | Yes | Yes | Optimistic | No |