Tuesday, 30 September 2008

Unit Testing Linq To Entities

We have decided that we're going to use Linq to Entities and the Entity Framework model in our new project. This means that for our model we are going to have to create unit tests that mimic the database environment. We don't want our unit tests to connect to the database itself, because that will make them unnecessarily slow and put the developers in a position where they will avoid running them whenever possible. Also, when unit testing against the database you must make sure that the data in the database is correct before running the test. The solution is to create your own data in code or with XML that you can test against reliably. I came across this blog by Ian Cooper that goes through a fantastic way to create your own data in exactly this fashion. If you're using Linq to Entities or Linq to SQL and you are trying to follow TDD, I highly recommend reading it.
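
One way to do this (a rough sketch of my own, not Ian's code; the repository and Employee types here are illustrative placeholders) is to hide the data access behind an interface that exposes IQueryable<T>, so a test can swap an in-memory list in for the real object context:

using System.Collections.Generic;
using System.Linq;

public class Employee
{
    public string Name { get; set; }
    public string EmailAddress { get; set; }
}

public interface IEmployeeRepository
{
    IQueryable<Employee> Employees { get; }
}

// The production implementation would wrap the Entity Framework object context;
// this test double wraps an in-memory list instead, so no database is needed.
public class FakeEmployeeRepository : IEmployeeRepository
{
    private readonly List<Employee> data = new List<Employee>
    {
        new Employee { Name = "Test User", EmailAddress = "test@example.com" }
    };

    public IQueryable<Employee> Employees
    {
        get { return data.AsQueryable(); }
    }
}

// A unit test can then run the same Linq queries the model uses, e.g.
// var match = repository.Employees.Where(e => e.Name == "Test User").Single();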

Friday, 26 September 2008

ASP.NET Membership - Custom Role Providers

With my investigations into the Enterprise Library I've come across the security section. Before I got into security I thought I'd take a more in-depth look into membership services. For my new project there are some requirements on the database: some legacy table structures need to be kept in place, and some of these hold the groups, roles and permissions for users. ASP.NET membership is very powerful and does most of what I need, that is to say we can use Active Directory authentication for the users. The available role providers, however, don't determine the roles for users the way I need. The solution? Simply override the role provider with my own custom role provider. After a little research I came across this gem, Josh Flanagan's Roles Provider Template. Just download and install the template as instructed and you'll be able to add new role providers to your code in no time. The template compiles without any extra code and implements no methods. Just fill out the implementations for the methods you require from your role provider and throw a not supported exception from everywhere else. Now all you need to do is include your new provider in your config file. There are a couple of gotchas here, so I'll show you mine.
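
Roughly, the relevant sections look like this (the provider names and connection string are illustrative; the custom role provider type is the one from my project):

<!-- In <connectionStrings>; see below for why the LDAP path needs a server. -->
<add name="ADConnectionString"
     connectionString="LDAP://domain.com.au/DC=domain,DC=com,DC=au" />

<!-- Inside <system.web> -->
<membership defaultProvider="ADMembershipProvider">
  <providers>
    <add name="ADMembershipProvider"
         type="System.Web.Security.ActiveDirectoryMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
         connectionStringName="ADConnectionString" />
  </providers>
</membership>
<roleManager enabled="true" defaultProvider="CustomDatabaseProvider">
  <providers>
    <add name="CustomDatabaseProvider"
         type="ASPMemberShipTechDemo.Models.Security.CustomDatabaseProvider, ASPMemberShipTechDemo" />
  </providers>
</roleManager>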

The above xml is placed inside the system.web section as normal for membership, with the connection string itself under connectionStrings. The Active Directory provider is first. You should be aware that membership does not allow Active Directory connection strings that are serverless, so mine ended up more like this:

LDAP://domain.com.au/DC=domain,DC=com,DC=au.

If you're having trouble, try that and see if it works for you. Unfortunately I don't have a good LDAP reference site I can recommend; if you know one, please let me know. The role manager is where I've plugged in my custom provider (though it has no code in it yet). Note that in the "type" attribute I have this:

ASPMemberShipTechDemo.Models.Security.CustomDatabaseProvider, ASPMemberShipTechDemo

You will need to put the assembly name after the class name so that the config file knows where to look. This is even more important if your provider lives in another library. After that I fired up the code and it all works fine. I can now run

"Roles.Provider.GetRolesForUser(username)"

and have it return nothing (because I haven't implemented my custom code yet) but hit the breakpoint in my custom provider. Full credit to Josh Flanagan and his custom roles provider template.
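
For reference, the skeleton you end up with looks roughly like this (this is my own cut-down sketch rather than the template's exact output; GetRolesForUser is the only member I'll actually implement, and for now it just returns an empty array):

using System;
using System.Web.Security;

namespace ASPMemberShipTechDemo.Models.Security
{
    public class CustomDatabaseProvider : RoleProvider
    {
        public override string ApplicationName { get; set; }

        // The only member I care about for now; the lookup against the legacy
        // group/role tables will eventually go here.
        public override string[] GetRolesForUser(string username)
        {
            return new string[0];
        }

        // Everything else is unsupported until I need it.
        public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
        public override void CreateRole(string roleName) { throw new NotSupportedException(); }
        public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
        public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
        public override string[] GetAllRoles() { throw new NotSupportedException(); }
        public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
        public override bool IsUserInRole(string username, string roleName) { throw new NotSupportedException(); }
        public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
        public override bool RoleExists(string roleName) { throw new NotSupportedException(); }
    }
}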

Wednesday, 24 September 2008

Microsoft Enterprise Library #3 – Validation Block

Apparently the Validation block is new; as I've never used the library before, it's no newer to me than any of the others. I did find that there were fewer examples of the more difficult stuff on the web, so hopefully I can provide one or two here. I had my reservations about the validation block before I started this entry, and I'm curious to see if they were valid. Data validation is very important to our new project. The validation block appears at first glance to fill the need, but we don't want to create a maintenance nightmare between database changes and code changes. Adjusting the length of a field could cause us no end of trouble when validating, unless whatever framework we use can check the database for the field length.

My Requirements

This time my requirements are a little more difficult to fill. I'm getting a good handle on how to do things with the Enterprise Library, so I'm confident that I'll be able to get over any problems I encounter.

  1. Validate data formats (like email addresses)
  2. Validate fields against the database (strings against column length, for example)
  3. Validate my POCOs (Plain Old C# Objects) to keep a strong level of abstraction

Step 1 - Data Format Validation

Seems like the easiest one to start with, and I can fill requirement 3 while I'm at it. I've created a class called Employee and added it to my project. The class looks like this:

public class Employee
{
 public string Name { get; set; }
 public int Age { get; set; }
 public string EmailAddress { get; set; }
}

Pretty straightforward. I want to validate the EmailAddress field, and since this is a POCO, if I can do this then I'm filling requirement 3 as well. Make sure your class is public or the configuration tool won't be able to see it. The first thing to do is add a Validation Application Block to the App.config file with the Enterprise Library configuration tool. Once added, right click the application block and add a new type. You'll have to load the assembly for your project to see your public class; in my case I selected the exe file and there was my Employee class. Some people have noted a problem where the class didn't show until restarting the tool; I didn't get this problem, just make sure you've built your application first. Once the type is there, add a new Rule Set.

Make sure you set this ruleset as the default ruleset, or you'll feel like a fool when you see that answer after going to Google to find out why it doesn't work. Trust me, I know. Next, right click the ruleset and under new is the option to choose members. I want to validate the email address so I chose that member from the list. For an email address I want to use a regular expression to validate that it is correct, so I right click the field I just added and add a new regular expression validator.

Lucky for me there is a pattern for email addresses so I don't have to devise my own. I set the message template to something useful and my settings ended up looking something like this:


Ok, now to test it. I added the assembly references for validation as I did for logging and exceptions (this time Microsoft.Practices.EnterpriseLibrary.Validation and Microsoft.Practices.EnterpriseLibrary.Validation.Configuration). The following code shows how to test it.

Employee myEmployee = new Employee();
myEmployee.EmailAddress = "memine.net";
ValidationResults results = Validation.Validate(myEmployee);
if (!results.IsValid)
{
    foreach (ValidationResult result in results)
    {
        Console.WriteLine(result.Message);
    }
}
 

When I run this, the address memine.net fails validation and the error "Email Address Invalid" is written out to the console. If I change the EmailAddress to me@mine.com it comes back valid. That was remarkably simple; I now have email address validation on my Employee object and I'm using POCOs to do it. Requirements 1 and 3 satisfied.

Step 2 - Validating Against the Database

Ok, so at first this was my main reservation with the validation block. There is no default validator to validate a POCO field against its corresponding database field, so I knew I'd have to write my own. I figured it was going to be hard, and I wasn't disappointed. There are a few gotchas, and hopefully this post might help someone figure out theirs faster than it took me. I decided that the employee name was a good field to create a custom validator for, so I added the Name field to the Employee type in the App.config using the Enterprise Library config tool and added a custom validator to it. When you add a custom validator you need to choose an object to validate with; upon loading my assembly it reported that no objects that inherit from Validator were found, so I had a starting point. A little code inspection led me to this:

namespace Microsoft.Practices.EnterpriseLibrary.Validation
{
public abstract class Validator
{
 protected Validator(string messageTemplate, string tag);
 
 protected abstract string DefaultMessageTemplate { get; }
 public string MessageTemplate { get; set; }
 public string Tag { get; set; }
 
 protected internal abstract void DoValidate(object objectToValidate, object currentTarget, string key, ValidationResults validationResults);
 protected virtual string GetMessage(object objectToValidate, string key);
 protected void LogValidationResult(ValidationResults validationResults, string message, object target, string key);
 protected void LogValidationResult(ValidationResults validationResults, string message, object target, string key, IEnumerable nestedValidationResults);
 public void Validate(object target, ValidationResults validationResults);
}
}

So I was going to have to override the DoValidate method and the DefaultMessageTemplate property. Didn't seem so hard. I created my class:

public class DatabaseValidator : Validator
{
    protected DatabaseValidator(string messageTemplate, string tag)
        : base(null, null) { }

    protected override string DefaultMessageTemplate
    {
        get
        {
            return "";
        }
    }

    protected override void DoValidate(object objectToValidate,
        object currentTarget, string key, ValidationResults validationResults)
    {
    }
}
 
And compiled. No errors, fantastic. But when I tried to load the assembly to add the validator to the Name field as a type, it didn't show. After some searching I found I needed to set a configuration element type on the class like so:

[ConfigurationElementType(typeof(CustomValidatorData))]
public class DatabaseValidator : Validator
{
    ...
}

After this it would show up just fine. Great, I thought, I'm almost there. I added the new validator type to the Name field (using a custom validator and selecting the assembly exe file) and compiled. No errors, but when running the application, BANG:

Additional information: Constructor on type 'ApplicationBlocksTechDemo.Validators.DatabaseValidator' not found. That told me a little: my constructor had the wrong parameters. It took me a while to find the solution, but eventually I did. Your validator class needs a constructor that takes a NameValueCollection as a parameter. I added this in and now my class looks like this (I also added in a default constructor to be safe).

[ConfigurationElementType(typeof(CustomValidatorData))]
public class DatabaseValidator : Validator
{
    public DatabaseValidator(NameValueCollection collection)
        : base(null, null) { }

    public DatabaseValidator()
        : base(null, null) { }

    protected DatabaseValidator(string messageTemplate, string tag)
        : base(null, null) { }

    protected override string DefaultMessageTemplate
    {
        get
        {
            return "";
        }
    }

    protected override void DoValidate(object objectToValidate,
        object currentTarget, string key, ValidationResults validationResults)
    {
    }
}
 

And recompiled. Success! The validator ran, and without any code in it the validation was considered successful. To implement the validator, all you need to do is fill out the DoValidate method with whatever validation you require. In this case I'll tell it to go to the database: from the key (the field name) and the target (the class name) I'll be able to discern the field, and thus the limitations in the database, and return the correct validation result.
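
As a rough illustration of where I'm heading with it (this goes inside DatabaseValidator; the GetMaxLengthFromDatabase helper is hypothetical and its schema query isn't shown), DoValidate might end up something like this:

protected override void DoValidate(object objectToValidate,
    object currentTarget, string key, ValidationResults validationResults)
{
    string value = objectToValidate as string;
    if (value == null)
    {
        // Only validating string fields against column lengths in this sketch.
        return;
    }

    // Hypothetical helper that looks up the column length for this class/field
    // combination from the database schema (e.g. INFORMATION_SCHEMA.COLUMNS).
    int maxLength = GetMaxLengthFromDatabase(currentTarget.GetType().Name, key);

    if (value.Length > maxLength)
    {
        LogValidationResult(validationResults,
            string.Format("{0} must be no longer than {1} characters.", key, maxLength),
            currentTarget, key);
    }
}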

Final Notes

With the success of the custom validator I'm actually quite excited to use the validation block in our new application. I can see the amount of code I can avoid having to write myself by using this tool, especially if we use ASP.NET MVC (the way it binds objects will lend itself to this very nicely). I hope this post helps someone.

Tuesday, 23 September 2008

Microsoft Enterprise Library #2 – Exception Application Block

This is the second in a series of blogs looking at the Enterprise Library for development. In the last session I looked at the logging block; this session I'm going to look at the Exception block. There is no coincidence about the order I'm doing these in: the Exception block has an optional dependency on the logging block that I'm going to exploit to fill my requirements for exception handling. For this blog post I'll be re-using some of the setup from the previous post for this reason.

My Requirements

My requirements for exception handling are far simpler than my logging requirements. I want to:

  1. Log all exceptions to the event log
  2. Email security exceptions to an email address
  3. Rethrow database exceptions to the application to handle at a higher level (so the user will see them)

Step 1 - Logging to the Event Log

I'm going to re-use my logging block for this; previously I set it up so that all error type logs go into the event log. The easy way to use this is to associate the base exception with the error logging mechanism. First I'll add a new Exception Handling Application Block to my config file. I'll add a new Exception Policy to that, and as I'm only going to have a single policy for this project I'll leave the default name. This is the base that I'll work from for this example. I want to log all exceptions, and in .NET every exception inherits from Exception, so I'll add a new exception type and choose System.Exception. After this is added I'll right click the new exception type and add a new Logging Handler. In my logging handler I'll choose the LogCategory of Error, which I previously set up in the Logging Application Block, and a FormatterType of TextFormatter, as the event log holds text. Your App.config should look something like this:


Now we need to add some code to show how to use the exception policy. I'll add new references to

  • Microsoft.Practices.EnterpriseLibrary.ExceptionHandling
  • Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging
(you'll need to browse to the DLL files to do so) and add a using to my code. Then I'll write some code to throw an exception, catch it and send it to the exception handler:

public static void TestExceptionHandling()
{
 Console.WriteLine("Exception handling starting");

 try
 {
  throw new System.Exception("Testing exception!");
 }
 catch (System.Exception Ex)
 {
  try
  {
   if (ExceptionPolicy.HandleException(Ex,
    "Exception Policy"))
   {
    throw;
   }
  }
  catch (Exception ExNew)
  {
   Console.WriteLine("Exception caught: " +
   ExNew.Message);
  }
 }

 Console.WriteLine("Exception handling finished");
 Console.ReadLine();
}

Now when I run this code, my exception handler uses the error logging facility that I've already set up and logs to the event log. It will also log to the log file, because I've told the system to log everything there as well. How easy was that?

Step 2 - Emailing Security Exceptions

I want to email security exceptions to someone because we need to be sure that we investigate these exceptions straight away. I've already set up warnings to be emailed to a user, and this security exception sounds like it fits in that category, so I'll re-use the Warning log type as before. I don't want to use any of the existing exception types, so I'm going to create my own; I'll call it CustomSecurityException. Make sure you make it public, because otherwise the Enterprise Library won't be able to see it. Here is the code for the custom exception.

public class CustomSecurityException : System.Exception
{
}

And as you can see it does nothing; it's just for example. Now we'll add a new handler for this type. Right click the policy and add a new type. You'll need to load an assembly; I selected my .exe file, where the public exception class resides, and it added my namespace and exception class to the selectable types. I selected my custom exception and then added a logging handler as before, this time selecting the Warning log category. Something to note: I've been told that the library is only loaded once into the Enterprise Library configuration tool, so if your objects are not there try restarting the tool and trying again. I didn't have this problem however. After running the application I get an email in my inbox specifying the exception details with the log format wrapped around it.
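
For completeness, the test code for this step is the same shape as step 1; the only real difference is the exception type being thrown (the policy name is the same one used above):

try
{
    throw new CustomSecurityException();
}
catch (CustomSecurityException Ex)
{
    // The policy matches CustomSecurityException and routes it to the Warning
    // log category, which is the one configured to send email.
    if (ExceptionPolicy.HandleException(Ex, "Exception Policy"))
    {
        throw;
    }
}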

Step 3 - Rethrowing Database Exceptions

The last thing I want to do with my exception handling environment is have the ability to rethrow new exceptions after processing. This time I'm just going to catch a System.Data.DataException; it pays to know the object hierarchy for exceptions when handling them. As before I add the System.Data.DataException type, and this time I set its PostHandlingAction property to "ThrowNewException". From here I added a Replace Handler to the DataException type and set the ReplaceExceptionType to System.Exception (select it from the list). I don't want the user seeing the original exception, so I put my own replacement message in there. It should look something like this:

Now when I write some code to throw a DataException, all I need to do is pass an output exception through to rethrow, like so:

try
{
    System.Exception newException;

    // Ex is the System.Data.DataException caught by the surrounding handler,
    // just as in step 1.
    if (ExceptionPolicy.HandleException(Ex,
        "Exception Policy", out newException))
    {
        throw newException;
    }
}
catch (System.Exception ExNew)
{
    Console.WriteLine("Exception caught: " + ExNew.Message);
}

The rethrown exception shows the replacement message configured above written out to the console.

Final Thoughts

Adding exception handling policies to your application is remarkably simple. Even if it's something as basic as logging them to the event log, you can be sure that every handled exception is picked up and dealt with. I can think of many real world examples in the past where I would have found this tool remarkably useful, and I hope to find places for it in the future. One thing I did not investigate in this post, and may have a look at next time, is creating a set of custom handlers for exceptions. This allows you to use your own code to handle your exceptions when they happen, with the exception block catching and directing them.

Monday, 22 September 2008

Microsoft Enterprise Library #1 – Logging Application Block

This all started as a means to evaluate different logging techniques. A colleague of mine found the Enterprise Library in his searches and referred it to me, so I decided it was worth looking into rather than going with my gut instinct and using Log4Net and Spring.Net. After my initial investigation into the Logging Application Block I found a few things that I took for granted that Log4Net does out of the box (I'll cover these below), but other than that it is a powerful logging engine that is far easier to set up, with changes managed through a graphical user interface. More importantly, the Enterprise Library seemed to offer integration of logging with other important areas of the application that we are also currently evaluating solutions for, like exception handling, validation and caching. This blog entry is the first of several designed to evaluate the use of the application blocks included in the Microsoft Enterprise Library 4.0.

Step 1 - Download and Install the Enterprise Library 4.0

You can find the Library Installer here. I used the default settings when installing and had no problems with that. You will need to note the installation directory.

Step 2 - Setting up a Logging Block

First you need a project. I've created a console application and called it ApplicationBlocksTechDemo. Once you've got your project created you'll need to add all the references and do a quick test to see if the block is working. I don't want to re-invent the wheel with this post, so if you don't know how to set up the block I suggest you use this tutorial from Vikas Goyal. It's very simple and straightforward and will give you the starting point for this guide. It is worth mentioning that in Enterprise Library 4.0 Microsoft.Practices.ObjectBuilder seems to be called Microsoft.Practices.ObjectBuilder2. You will also need to browse for the DLL files directly; they are in the bin directory of your Enterprise Library installation directory. Add references to your project for:

  • Microsoft.Practices.EnterpriseLibrary.Common
  • Microsoft.Practices.EnterpriseLibrary.Logging
  • Microsoft.Practices.ObjectBuilder2

Once these are added you'll be able to log the most basic of information; by default it will log to the event log. If you're building an enterprise application you'll almost certainly want to capture logs to many different locations, so this tutorial will show you how to do that. At this point you should also add an App.config file to your project. Your project solution should look something like this now.


Step 2 - Adding the Logging Application Block

In the bin directory of the Enterprise Library install directory you will find the EntLibConfig.exe file. Run it and open the App.config file you just added to your solution. Right click the application node and add a new Logging Application Block; the default settings will be added. Save your changes and go back to your project. Your App.config file will ask to be re-loaded; when you choose yes you'll be able to see the configuration that has been added. In my opinion this is part of the power of the Logging Application Block: you don't need to edit the XML directly, so I collapse the logging config region in my App.config.

Step 3 - Adding My Requirements

Before adding my requirements into the block, I need to understand the application block sections. These are:

  1. Filters – You can filter out log messages before they reach the distributor.
  2. Category Sources – These are the types of log messages you are capturing, with each one specifying the trace listener it will log to.
  3. Special Sources – Another way to capture the messages and send them to trace listeners.
  4. Trace Listeners – Output forms for the logs.
  5. Formatters – The format of the output.

My requirements are simple, I want to:
  1. Send errors to the event log
  2. Send warnings to an email address
  3. Log all messages to a log file using a simple one line format.

Step 4 - The Event Log

First, remove the event log listener from the General category. Next, right click the Category Sources node and add a new category; I called mine Error. Right click the category once it's created and add a new trace listener reference. Choose the Formatted EventLog TraceListener (added by default) and save the config file. Your Logging Application Block should look something like this. In your application, add some code to create an error. It's very simple:

 
 
using Microsoft.Practices.EnterpriseLibrary.Common;
using Microsoft.Practices.EnterpriseLibrary.Logging;
using Microsoft.Practices.ObjectBuilder2;
 
namespace ApplicationBlocksTechDemo
{
class Program
{
 static void Main(string[] args)
 {
  Console.WriteLine("Logging tool starting");
  Logger.Write("Error Message", "Error", 1, 1,
   System.Diagnostics.TraceEventType.Error);
  Console.ReadLine();
 }
}
}
 

When you check the Application event log after running your application, you'll now see the error there.

Step 5 - Send Warnings to an Email Address

Your requirements will probably be different to mine, as mine are set up to show off some of the features that I like in the Enterprise Library, but if you want to email warnings to someone this is how it's done. We don't have a warning category yet, so add one in; I called mine Warning. We now need a trace listener that lets us send warnings to an email address. Right click the Trace Listeners node and add a new Email Trace Listener. Set all the parameters; they're pretty straightforward. Then right click your Warning category and add a new trace listener reference to your email trace listener.


Add in a line of code to log a Warning

Logger.Write("Warning Message", "Warning", 1, 1,
System.Diagnostics.TraceEventType.Warning);

and run your program. Warnings are now sent by email to the user specified.

Step 6 - Logging to a File

Lastly, we want to log everything to a file, even the errors and warnings we've handled already. Also, we only want each log entry to take up one line so that we can review the information a little more easily. We'll comma-separate the fields so we can open the log in Excel for review if required. We also don't want our log file growing so large that it becomes impossible to read. The first thing we need to do is add a new formatter. Right click the Formatters node and add a new one; I called mine LogFileCSV Formatter. You can edit the template once it's added; mine looks like this:

{category},{timestamp},{priority},{machine},{message}

Now we need a new trace listener, so add one of type "Rolling Flat File Trace Listener". This will create a new file based on the rules you choose, at the intervals you set. The options are pretty straightforward; I chose to create a new file every 1000 KB and to stamp the old one with a timestamp pattern of yyyy-MM-dd. Choose your new formatter as the Formatter option, and make sure you remove the header and footer if you want each entry to take up only one line. Finally, under Special Sources, add the new trace listener to the All Events option; this will log every event into your log file.
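
No extra code is needed for this step; because the All Events special source sees every log entry, even a plain informational message (using the same Logger.Write overload as before) lands in the rolling file alongside the errors and warnings from the earlier steps:

Logger.Write("Information Message", "General", 1, 1,
 System.Diagnostics.TraceEventType.Information);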

Finished

Looking back over my project, I’ve added in the framework to complete all my requirements without writing a single line of code. My requirements were incredibly basic, but there is no reason why it should take more than this to create your logging framework.

Log4Net Comparison

I've always used Log4Net in the past, and I will do so again for small projects that don't require an enterprise level solution. But why would I use the Logging Application Block over Log4Net for an enterprise level solution? Here are some reasons.

  1. Speed. I set up a test where two different applications would log 10,000 messages to a file and 10,000 messages to the event log in exactly the same format. I tried to make the code as efficient as possible in both cases. The Logging Application Block completed this task in a little over one second; the Log4Net test took a little over 8 seconds. Most of this was accessing the event log in both cases; logging to file was far quicker, as you would expect.
  2. No XML editing. Using the Logging Application Block you can use the graphical interface and are not required to edit the XML config at all. For small projects dealing with XML is easy, as your logging isn't required to do much, but the more complex your requirements become the harder it is to sift through the XML. The Logging Application Block excels here.

Why would you use Log4Net over the Logging Application Block?

  1. Smaller footprint. Log4Net is certainly smaller. If this is an issue for you then perhaps Log4Net is the better option.
  2. Tracing back to the source. If you wish to trace back to the source (log the class or function that threw the exception) then Log4Net handles this natively. You would have to implement this yourself in the Logging Application Block (though admittedly it is not very difficult, and the time you would have to spend writing XML for Log4Net would easily cover it).

Final Notes

No matter which logging framework you choose, I highly recommend that you abstract it from the rest of your code and hide the implementation so that you can switch without any trouble should the need arise. The Logging Application Block is remarkably simple to implement, and even if it is a sledgehammer, I wouldn't want to break up a block of concrete with a chisel. Interoperability with the other application blocks in the Enterprise Library is something I will investigate very shortly; it promises to be the main reason I want to use the Logging Application Block.

Wednesday, 17 September 2008

Navigation within MVP

I've been doing a lot of research into various frameworks so I can make an informed decision on the new application I'm developing. In evaluating MVP, ASP.NET MVC and the traditional three tiered approach I came across a question that I found difficult to find information on: how should navigation be handled in an MVP ASP.NET application, and who should handle it? Well, the answer to the first question was easy: any way you like. You could use the Spring.NET libraries, PageMethods, Response.Redirect or write your own; it doesn't matter, and MVP doesn't prescribe any of the above. The more important question is who should handle it. I found a great article on CodeProject on Model View Presenter implementation that was very useful. The article points out that navigation raised through the presenter should be raised in the View via an event. So in a simple example, if you have a login form your presenter might look like this:
public class LoginController
{
    // The view interface for manipulating the user interface.
    ILoginView view;

    public LoginController(ILoginView view)
    {
        this.view = view;
        SubscribeToEvents();
    }

    private void SubscribeToEvents()
    {
        view.OnLogin += OnLogin;
    }

    public void OnLogin(object sender, EventArgs e)
    {
        // Login(...) is assumed to call into the model to authenticate the user.
        if (Login(view.userName, view.password))
        {
            view.LoginSuccessful();
        }
    }
}
And in your view you will simply implement the login successful method (don't forget to add it to the interface; there's a sketch of the view side below). This means that the business rules for navigation are actually sitting in your view layer; however, if you use a navigation framework they will be well encapsulated and simple to read and understand. I have not done any investigation into which framework would be appropriate for my project. If we choose to go with MVP then this will have to be done and I'll blog that too, but in the meantime have a look at the Spring.NET libraries and PageMethods.
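
To make the view side of that concrete, a minimal sketch might look like this (the page, control and interface member names are illustrative; only the navigation-related pieces are shown):

using System;

public interface ILoginView
{
    string userName { get; }
    string password { get; }
    event EventHandler OnLogin;

    void LoginSuccessful();
}

// Code-behind for the login page. The presenter decides when to navigate
// (by calling LoginSuccessful); the view decides how.
public partial class LoginPage : System.Web.UI.Page, ILoginView
{
    private LoginController controller;

    public string userName { get { return txtUserName.Text; } }
    public string password { get { return txtPassword.Text; } }
    public event EventHandler OnLogin;

    protected void Page_Load(object sender, EventArgs e)
    {
        controller = new LoginController(this);
    }

    protected void btnLogin_Click(object sender, EventArgs e)
    {
        if (OnLogin != null)
        {
            OnLogin(this, EventArgs.Empty);
        }
    }

    public void LoginSuccessful()
    {
        // Navigation is raised by the presenter but performed by the view;
        // this could just as easily delegate to Spring.NET or PageMethods.
        Response.Redirect("~/Default.aspx");
    }
}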

Monday, 15 September 2008

Application Framework Choices - ASP.NET

So I was going to blog about some of the stuff that went down at TechEd, and while that'd be fun, every man and his dog who was there has something to say. For me one of the main purposes of attending TechEd was to get some information about emerging and existing frameworks in the .NET web development sphere. There are a few choices that fit the requirements of the new application we're developing here:

  1. ASP.NET MVC
  2. ASP.NET with the MVP pattern
  3. Standard 3 tier architecture

What am I looking for?

I'm looking for a few things. With the large number of developers and the multiple team setup that we'll be using for this project, there are a few things that would make life simpler to manage. In particular the structure of the framework we choose is very important.

Enforced Separation of Concerns
This is the ability of the framework to force the developer to separate the model, view and controller concerns. The benefits include more testable, more re-usable and more understandable code. Often the separation of concerns is not enforced, allowing the developer to code business rules into the view should they see fit; with the large team we are putting together I would consider this a disadvantage.

Simplicity
The chosen framework should ideally be simple. A complex framework will either add complexity to the code or add significant developer time to set up the framework for each process.

Readable
Too often overlooked, the application code implementing the business rules and interface should be as readable as possible. A framework that forces developers to make non-standard calls, constantly serialise or lose track of the call stack will make development far more difficult than I would like in a large team of varying levels of expertise.

Efficient
Any processing overhead is a large negative.

Robust
A proven, robust framework will ease the business' concerns when pitching the application architecture.

Supported
There's nothing worse than choosing a framework only to find that before the product is released it is no longer supported. Any framework choice must consider the chance that the technology may become redundant.

Productive use of Namespaces
There will be an obscene amount of code required to complete this project. The last thing we need is to have to sift through the code to find the class we need. The framework would ideally support separation of the code into productive namespaces that assist in identifying issues and building structure.

Proven
A technology that's proven in the industry will help alleviate concerns that the framework may not be able to fill all of our business needs.

Fast Development Time
Just like every other project, ever, our project has tight time constraints. The faster we can build code in the framework the better, but not at the risk of the attributes above.

Testable
We are using TDD on this project, so whatever design framework we choose should be as testable as possible, so that we can cover as much of our code as possible.

ASP.NET MVC

This is a new technology currently in technical preview 5, with the beta release to come out soon. That in itself is a large risk; however a colleague of mine mentioned that the world is in perpetual beta, and soon after release a technology is likely to become redundant anyway. Scott Hanselman mentioned at his TechEd talk that there is no such thing as a professional developer, we're all amateurs.
With the constant release of technology I don't believe it would be a good idea to discount a technology just because it's new. After the session on architectural considerations for the ASP.NET framework with Tatham Oddie I had some time to speak to him briefly about considering the ASP.NET MVC framework for a large production application. His response was that the project is very mature and definitely worth considering as production ready.

Benefits
  1. Enforced separation of concerns
  2. Simplicity
  3. Readable
  4. Supported
  5. Efficient
  6. Fast Development Time
  7. Testable
While the technology is new, it does produce very pretty code. It is remarkably simple, but also very extensible in case it doesn't quite fill your needs. As an MS product, it is well supported and will begin to pave the way for the future of web applications under .NET.

Disadvantages
  1. Robust
  2. Productive use of Namespaces
  3. Proven
There are obvious concerns about using the ASP.NET MVC framework due to the unproven nature of the tool. Tatham Oddie did mention that Scott Hanselman has MVC in use on several projects already, and that was reassuring, however it is a risk. The open nature of the environment helps reduce this risk: something doesn't work? Do it yourself. The biggest problem I've come across so far is the lack of productive namespaces for controllers. You cannot create a controller with a namespace outside of the Controllers namespace, or if you do it will have no effect. I may wish to group my application into several key areas; for example /Inventory/Transactions/Edit/4 is not feasible, instead I would have to use /InventoryTransactions/Edit/4. There are ways around the URL, but the root of the problem remains the same: all of my controller names must be unique. Steve Sanderson has a good blog post (here: http://blog.codeville.net/2008/07/30/partitioning-an-aspnet-mvc-application-into-separate-areas/) on how to separate these by extending the environment, however it is quite complex and requires overriding the default routing, and there may be gotchas as it is untested currently. This may still be a good option for us, as there is little risk in using it and we can always refactor back to the regular method without too much trouble.

ASP.NET MVP Design Pattern

The Model-View-Presenter pattern has actually been broken up into two different types; the type I used for my prototype and testing is the Supervising Controller. They are differentiated by how much logic is placed in the view layer. MVP is not a framework in that it is something you build into your application on your own, and is thus a design pattern. There are many advantages of this over a framework, and also many disadvantages. Firstly you get greater control over your code: you can implement as much of the pattern as you wish, circumvent the pattern where you wish and choose your own interpretation of the pattern where you see fit. This is also a major disadvantage. MVP aims to separate concerns much the same as MVC, with a three layer approach. Instead of a framework you plug into your application, you must develop the code to implement this structure yourself. I did not find it very hard, and the output code was quite pretty. I made the mistake of doing a website project rather than an ASP.NET Web Project, so unit testing was quite difficult for me; however I found the separation of concerns to be quite simple to test, and abstracting the business logic from the user interface was very easy. My biggest problem with this model is all the extra code I had to write. With MVC I implemented a form (view), a controller and the data binding. With MVP I had to create the interface structure for the view layer, the ASP.NET page, bind the page to the interface, create the controller, bind the interface to the controller and verify all the events, and this is without any form of data binding (which would add more difficulty to each view).

Benefits
  1. Separation of Concerns (not enforced)
  2. Readable
  3. Efficient
  4. Robust
  5. Supported
  6. Productive Use of Namespaces
  7. Proven
  8. Testable
The separation of concerns is not enforced in MVP: the developer could easily circumvent it, and the code used to do so could potentially be very hard to track down. The output code was very readable, far more than I had expected, but I would expect my developers to need to know the pattern quite well before they would find it easily readable. The design pattern is well used and has been around for a very long time; there is no reason for it to be any less efficient or less robust than the straight ASP.NET approach that I'll investigate next. I also had full control over my own namespaces, which was far more enjoyable to work with. I think my resulting code would be far more beautiful, even if it did take significantly longer to develop.

Disadvantages
  1. Enforced Separation of Concerns
  2. Fast Development Time
  3. Simplicity
It cannot be said that MVP is simple. Compared to ASP.NET MVC it is very complicated and difficult to grasp first time around. It will take a fair chunk of time longer to develop an ASP.NET application in MVP compared to ASP.NET MVC, and the separation of concerns is not enforced.

Standard 3 Tier Architecture

Using code behind, a business layer and a data layer, I created the same simple application using the standard 3 tier architecture. Events were tied to the code behind class and business rules were encapsulated into classes in the business layer. I used an ASP.NET Web Forms project this time, so adding in the unit testing project was a breeze; however this is where the problem with code behind comes into play. I found it very difficult to test the code behind class, and there is a fair bit of code and logic required here to tie the events to the business logic.

Benefits
  1. Efficient
  2. Simplicity
  3. Supported
  4. Productive Use of Namespaces
  5. Proven
  6. Fast Development Time
I am hesitant to put #5 into this category. My project was incredibly small and simple, and the standard 3 tiered model was by far the fastest because I didn't have to learn anything and I had done similar before many times. I suspect that in a large project it would be far more complex and the lack of separation may become an issue. This method does allow you to set up your namespaces as you see fit, it is a supported technology just like MVP, and it has been proven to work many times over in other projects.

Disadvantages
  1. Enforced Separation of Concerns
  2. Readable
  3. Robust
  4. Testable
There may not be many drawbacks to this approach, but the ones it has are major. The resulting code left large chunks of very difficult to test code in the view layer. This will affect the robustness of the code heavily if we are unable to test as much as we would like. The resulting code is often far more difficult to maintain due to the business logic being tied directly to the view layer. Finally, the lack of separation makes the code more difficult to read in large projects.
Results

Unfortunately the results were not as conclusive as I would have hoped. Each of the three methods I investigated has its own advantages and drawbacks. I think if I can get past the fact that ASP.NET MVC has not yet been officially released, and that my controller names must be unique, then it would be the best framework of the lot. With these tools it depends a lot on the size of your project. If you have a small project then you probably cannot go past the 3 tiered approach; the time saved getting the code into place will more than make up for the difficulty in unit testing the web forms over a short period. For longer projects with a large code base I think the choice is between the other two, and which you choose at this point in time will depend on your willingness to work with prototype technology. For my project, I'm going to recommend ASP.NET MVC to gauge the reaction of the business to working with prototype technology versus the saved development time. I think we can get past the unique controller names without much trouble, and we could implement namespaced controllers as mentioned above. If the business is unwilling to work with such new technology then I will happily suggest MVP (Supervising Controller) as the alternative, with the small but not insignificant extra development time it will require.

Wednesday, 3 September 2008

TechEd

So here I am at tech ed, not sure how much I'm going to get out of the next three days at all but there are lots of events that I really want to see. Hopefully I'll be able to learn something useful, suppose I'll just have to wait and find out.