Friday, November 1, 2013

Azure and the cloud in the real world

So, I haven’t blogged in a really long time. Quite frankly, when you’re working with LOB apps for years it gets kind of stale. You can only figure out so many techniques to do the same set of things and most of those I get from other people’s blogs.
I’m working on a new project right now that required some new approaches, some new architecture, some new technologies. I’m developing a site that’s going to need to scale potentially massively and rapidly so I started my first serious look into Microsoft Azure.
I probably only know enough to be dangerous at this point, but so far I’ve had a pretty pleasant experience with Azure. I think the tutorials provided by Microsoft and the articles I see on the internets do a really good job explaining the “HOWTO” type of content. What seemed vague to me at first, and I suspect to many others, is the “why/where” type of question. Why should I use Azure Table Storage, when does it make sense, when should I prefer Azure SQL Database, etc.?

My answer was both, at the same time. The site I’m developing will have a search engine side of things as well as a social media aspect to it. If you think developing a social media site of any kind lends itself well to a SQL database approach, then by all means stick with that approach, but you’re likely a stronger SQL person than I am. My head starts to swim a little when I consider the schema changes required by the fluidity of a social media paradigm over time. Not to mention the sheer weight and intensity of wrapping my head around a simple “like” button that could be used for what would be rows in 1000 different tables. Think about all of the different “kinds” of things one can “like” or “+1” and you’ll probably come to a similar conclusion: SQL is doable, but man, would it take a long time to get perfect. It would also bog down in the myriad of challenges you’re bound to face implementing new features.
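To make that concrete, here’s a minimal sketch of how a single schema-less record could cover every kind of likeable thing. Every name here (LikeRecord, TargetKey, etc.) is hypothetical, invented for the example rather than pulled from my codebase:

```csharp
using System;

// Hypothetical sketch: one schema-less "like" record covers any target kind.
// In Table Storage this would be a table entity; a plain POCO keeps the idea
// stand-alone without the storage SDK.
public class LikeRecord
{
    // PartitionKey candidate: the user doing the liking.
    public string UserId { get; set; }

    // RowKey candidate: "{targetKind}|{targetId}" - photo, comment, profile, ...
    public string TargetKey { get; set; }

    public DateTime LikedOnUtc { get; set; }

    public static LikeRecord For(string userId, string targetKind, string targetId)
    {
        return new LikeRecord
        {
            UserId = userId,
            TargetKey = targetKind + "|" + targetId,
            LikedOnUtc = DateTime.UtcNow
        };
    }
}
```

Liking a new kind of thing then becomes a new TargetKey prefix, not a new table.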

I do think Azure SQL Database makes great sense for the things I want to search on vs. Azure Table Storage. In prototyping the Table Storage stuff, it’s just not really designed for performant searching. What it does do really well, however, is abstract away the concerns of an ever-shifting schema. Oh, there are still concerns, especially when it comes to changing your object models and TableEntities, but nothing compared to updating a SQL database.

Working with Azure in general made me rethink my architecture approach completely. Not for the better, I can assure you, but toward an architecture that suits my needs for rapid development of a website, with the promise of some help down the road in making it a more distributed and proper architecture. One of the things Azure liberates me from in the short term is worrying about how my application will scale due to my architecture choices. At least as a start-up, choosing to scale the application simply by creating another instance of the entire website is a simple and valid approach.

So, I created a simple approach: my entire workflow would be based on an abused UnitOfWork pattern. Those familiar typically know this as a database-centric design pattern, but the actual pattern is generic, of course. I expanded my UnitOfWork to contain everything I needed to know from the moment a user interacts with the website, throughout the full round-trip of the interaction. This website is in C#/Asp.Net MVC and starts a UOW from the controller action, maintaining that object through the lifecycle of the page request right up into the view if needed (although usually I do create a proper POCO view model with simple primitive properties).

So what’s this thing look like?

    public interface IUnitOfWork : IDisposable, ICommunicationUnit
    {
        object LookupId { get; set; }

        ILoggingHandler LoggingHandler { get; set; }

        Controller Controller { get; set; }

        WebUserViewModel CurrentWebUser { get; set; }

        IFabnuDataContext DataContext { get; set; }

        bool IsStartValid { get; set; }

        void SaveMessagesToTempData();
        void LoadMessagesFromTempData();
        void CopyWarningsToModelState();

        ActionResult OverrideResult { get; set; }
    }

    public interface IUnitOfWork<TViewModel> : IUnitOfWork
    {
        TViewModel ViewModel { get; set; }

        bool Validate(AbstractValidator<TViewModel> validator);
    }

So, I know what you’re thinking, that’s crazy talk, and you’re right, partially. Consider this however. Yes, I’m taking the controller in its entirety end to end and, for now, the entire web framework is baked in down to the repository and vice-versa. There will be no easy separation of code, there will be overloaded objects with far too many responsibilities, etc. I’m already starting to see some of that encroach but I’m willing to let it stand for today.

What I am doing however is separating my code into logical layers just as I would if this were a true n-Tier approach. Although once scaled properly into the Azure world I will not have access to a controller in the entire codebase, I will have access to all of the Azure based tiers. What that means is, all of my code designed to access Azure Table Storage, Azure SQL, Azure Blobs, etc. will still work perfectly well once I move chunks of my logical “service tier” into the cloud as worker roles. Yes, I will have to remove any code dealing with controllers directly but that’s been minimal so far and is acceptable loss to me for the time being.

The reason this approach would never make sense for anything but a small website of course is that I couldn’t reliably scale this architecture before. Azure allows me to simply duplicate my website as if it were an overblown web role while I get the time to really harness the code to Azure and split each service class into its own worker role and each controller background into its own web role that could be scaled more effectively in both cost and code concerns.

Right now I piggy-back an ICommunicationUnit interface between my UOW and my view models. Because there is only a single unit of work, a single DB connection throughout the entire request, and a single controller/HTTP request instance, I’m able to take a lot of shortcuts otherwise impossible. My data context has access to the HTTP cache and can update the cache when you save something to the database, reducing the overhead and complexity of keeping the cache from going stale. I can also save a portion of my ViewModel properties off to Table Storage while saving other properties off to the SQL database. This may sound incredibly error prone, and I think it absolutely would be, except I’ve implemented a simple XML file -> T4 template code generation DSL that allows me to modify a few XML files and dictate which properties of the composite view model are saved to SQL, which are saved to Azure storage and which are just in-process viewmodel properties used to communicate through the entire application.
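As a rough illustration of what the generated mapping amounts to (the real version is emitted by the T4 template from the XML files; every name below is invented for the example):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hand-written stand-in for the generated code: each composite view model
// property is flagged with exactly one storage destination.
public enum StorageTarget { Sql, TableStorage, InProcess }

public static class ProfileViewModelMap
{
    public static readonly Dictionary<string, StorageTarget> Properties =
        new Dictionary<string, StorageTarget>
        {
            { "UserName",      StorageTarget.Sql },          // needs to be searchable
            { "FavoriteQuote", StorageTarget.TableStorage }, // schema can drift freely
            { "IsDirty",       StorageTarget.InProcess }     // never persisted
        };

    // The save path asks for the property names bound for each destination.
    public static IEnumerable<string> For(StorageTarget target)
    {
        return Properties.Where(p => p.Value == target).Select(p => p.Key);
    }
}
```

The point of generating this table rather than hand-writing it is that the XML files stay the single source of truth for where each property lives.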

I collect all of my errors and warnings in a simple domain-specific list of message POCOs containing some basic info like the level of the message: msg|warning|error. Warnings in my project are things that end up in the MVC ModelState and get propagated back to the user, such as “Password field requires 6 letters”, while errors contain exception messages and stack traces. At the conclusion of the unit of work, when I’m about to return a view back to the consumer, I simply iterate my message list built up through all of the tiers and shove any errors into Table Storage with a PartitionKey of today’s date and a RowKey of a generated GUID. Inside the error table entity I store the UserAgentString, what Url and UrlReferrer the user was on when the exception occurred and any other useful info available to me with the entire web stack at my fingertips. Errors/Exceptions are a great thing to put in storage, what do I care if the schema changes over time?
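Here’s a sketch of that error record and its keys; ErrorEntity and its property names are illustrative (in the real code this derives from a Table Storage entity base class):

```csharp
using System;

// Illustrative error record: partitioned by day, unique by GUID.
public class ErrorEntity
{
    public string PartitionKey { get; set; }  // today's date: one partition per day
    public string RowKey { get; set; }        // generated GUID: unique within the day
    public string UserAgentString { get; set; }
    public string Url { get; set; }
    public string UrlReferrer { get; set; }
    public string StackTrace { get; set; }

    public static ErrorEntity Create()
    {
        return new ErrorEntity
        {
            PartitionKey = DateTime.UtcNow.ToString("yyyy-MM-dd"),
            RowKey = Guid.NewGuid().ToString()
        };
    }
}
```

A nice side effect of keying by date: asking for “all of today’s errors” is a single-partition query, which is the cheap kind of Table Storage query.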

Other things that lend themselves well to Table Storage are user profiles, info that changes schema over time. I add a new property to my XML file, flag it as “TableStorage” and my ViewModel and TableEntity get updated with the new property and I let AutoMapper handle the rest. My models change, my code to load/save to entity storage does not.
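A toy stand-in for what AutoMapper is doing for me here (both types are hypothetical): copy identically named properties across, so a property added to both sides flows through with no new load/save code:

```csharp
using System.Linq;

// Hypothetical pair kept in sync by the XML -> T4 generation step.
public class UserProfileViewModel
{
    public string DisplayName { get; set; }
    public string FavoriteQuote { get; set; }
}

public class UserProfileEntity
{
    public string DisplayName { get; set; }
    public string FavoriteQuote { get; set; }
}

public static class SimpleMapper
{
    // Copies identically named, identically typed properties - a toy version
    // of what AutoMapper handles for real.
    public static TDest Map<TSource, TDest>(TSource source) where TDest : new()
    {
        var dest = new TDest();
        foreach (var dp in typeof(TDest).GetProperties().Where(p => p.CanWrite))
        {
            var sp = typeof(TSource).GetProperty(dp.Name);
            if (sp != null && sp.PropertyType == dp.PropertyType)
                dp.SetValue(dest, sp.GetValue(source, null), null);
        }
        return dest;
    }
}
```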

I’m very keen on seeing how this all plays out as this application truly needs to scale. I realize this article is a bit nebulous but I thought I’d throw out my n00b approach to Azure and what’s available on the stack right now. To me, working with the code about a solid month into development now, it’s panning out nicely. I have yet to start gophering off into distinct web roles but I’m not scared. The lion’s share of code is talking to the Azure tiers themselves, the web stack code isn’t going to be difficult to segregate. The Azure SQL Database and Table Storage will always be available to all of my tiers moving forward, my UOW will certainly change over time but all in a centralized manageable core. I can pick and choose moving forward what codebases need to be optimized and scaled. The first likely candidate would be the authentication and authorization code blocks as they’re prone to get hit the hardest and most often in general.

Once I get to the point I have that tucked away in a service tier truly communicating via REST with the rest of the architecture I’ll be happy to report if I was right, or… wrong… Another thing I’m yearning to see play out is moving the entire ICommunicationUnit paradigm into the Azure Queue approach. No more worrying about how to communicate from controller to service to data and back out. I’m not saying there aren't better approaches certainly but the ability to connect to a queue system and pick that message back up later based on logged in user, or area of code, or basically any criteria you want: powerful stuff.

Thursday, February 24, 2011

AccuTimeCard for vWorker (RAC) crashes

The AccuTimeCard for vWorker (previously RAC – Rent A Coder) crashes when you first run it.

The exception dump listed below is a Google-friendly search helper for people having the same issue.

I’m running Windows 7 (Ultimate) x64 and got this exception as soon as I ran the application. Thankfully, the application was written in .Net, so I was able to use .Net Reflector to resolve the issue. The exception was occurring in the Form Load event, so I took that into disassembly in Reflector, coming up with the following:

private void Form1_Load(object sender, EventArgs e)
{
    this.Visible = false;
    StringBuilder builder = new StringBuilder();
    builder.Append("-classpath ");
    builder.Append(@"resources\jars\wsdl4j-1.5.1.jar ");
    // ...additional jar paths elided from the disassembly...
    string arguments = builder.ToString();
    Process.Start("javaw.exe", arguments);
}

The offending line, the one referencing the file it cannot find, is of course Process.Start("javaw.exe", arguments). One quick check concluded that the path to the Java runtime was not present in my PATH environment variable.

Since non-technical people now use this application, I will try to explain how to add this path to your own PATH environment variable. These instructions are for Windows 7 and will likely work fairly closely on Vista (for those of you who are masochists and still use that OS).


First, find your install of the Java runtime. Mine was located at: C:\Program Files (x86)\Java\jre1.6.0_13\bin. If you’re not using a 64 bit operating system it should be under C:\Program Files\Java\jre1.6.0_13\bin. Copy that path to your clipboard.

Open up your Control Panel and go to “System and Security”, “System”, “Advanced System Settings”. I’m sure there’s a faster way to do this but that’ll work for now. Go to the Advanced tab: we’re getting really advanced now! Click on “Environment Variables”. In the second list box you’ll need to locate your PATH variable. Double click on that and you should see something like the following when you’re done pasting at the end of your variable setting. Please note it’s very important you add a semi-colon “;” (no quotes) between the last path setting and your newly pasted path to your Java Runtime binary folder.




Microsoft .NET Framework
The system cannot find the file specified
See the end of this message for details on invoking
just-in-time (JIT) debugging instead of this dialog box.
************** Exception Text **************
System.ComponentModel.Win32Exception: The system cannot find the file specified
at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo)
at System.Diagnostics.Process.Start(ProcessStartInfo startInfo)
at RacTimeCardLauncher.Form1.Form1_Load(Object sender, EventArgs e)
at System.EventHandler.Invoke(Object sender, EventArgs e)
at System.Windows.Forms.Form.OnLoad(EventArgs e)
at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
at System.Windows.Forms.Control.CreateControl()
at System.Windows.Forms.Control.WmShowWindow(Message& m)
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
************** Loaded Assemblies **************
Assembly Version:
Win32 Version: 2.0.50727.4927 (NetFXspW7.050727-4900)
CodeBase: file:///C:/Windows/Microsoft.NET/Framework64/v2.0.50727/mscorlib.dll
Assembly Version:
Win32 Version:
CodeBase: file:///C:/Users/octa/AppData/Local/Exhedra/RacTimeCard/RacTimeCardLauncher.exe
Assembly Version:
Win32 Version: 8.0.50727.4927 (NetFXspW7.050727-4900)
CodeBase: file:///C:/Windows/assembly/GAC_MSIL/Microsoft.VisualBasic/
Assembly Version:
Win32 Version: 2.0.50727.4927 (NetFXspW7.050727-4900)
CodeBase: file:///C:/Windows/assembly/GAC_MSIL/System/
Assembly Version:
Win32 Version: 2.0.50727.4927 (NetFXspW7.050727-4900)
CodeBase: file:///C:/Windows/assembly/GAC_MSIL/System.Windows.Forms/
Assembly Version:
Win32 Version: 2.0.50727.4927 (NetFXspW7.050727-4900)
CodeBase: file:///C:/Windows/assembly/GAC_MSIL/System.Drawing/
Assembly Version:
Win32 Version: 2.0.50727.4927 (NetFXspW7.050727-4900)
CodeBase: file:///C:/Windows/assembly/GAC_MSIL/System.Runtime.Remoting/
************** JIT Debugging **************
To enable just-in-time (JIT) debugging, the .config file for this
application or computer (machine.config) must have the
jitDebugging value set in the system.windows.forms section.
The application must also be compiled with debugging
enabled.
For example:
<configuration>
    <system.windows.forms jitDebugging="true" />
</configuration>
When JIT debugging is enabled, any unhandled exception
will be sent to the JIT debugger registered on the computer
rather than be handled by this dialog box.

Saturday, February 12, 2011

Software Development Productivity in Windows 7

   This won’t replace my AutoHotKey but it’s a step in the right direction from Microsoft Windows.

   I recently stumbled across some keyboard shortcuts in Windows 7 that have increased my productivity as a software developer. If you’re a developer you probably favor the keyboard over the mouse when getting your work done. I’m constantly on the lookout for keyboard shortcuts and new automation tools to help liberate me from that pesky mouse and speed up my work process.

   Windows 7 has some nice new keyboard features I didn’t know about until recently. I actually stumbled upon them quite by accident and did a little exploring. When I put 2+2 together in my head I came up with a quick way to jump around to all of the applications I use frequently during the day.

   Here are some applications I use constantly and their corresponding shortcut key combinations. These keyboard combinations are built into Windows (since Vista) as described by Wikipedia.


   That’s right – starting with the first application pinned to your taskbar, it’s the Windows key + 1 [ 2, 3, … n ] to bring up the application (Win+0 is the 10th and final application shortcut in the chain).

   Now the first thing that’s nice is that I will get the running instance of the application if the application is already running or a new instance of the application if it is not (singleton approach). If I want to start a new instance of the application rather than pop up the instance already running all I need to do is Win+Shift+1 [2, 3, … n].

   If I have multiple instances already running, repeatedly hitting Win+Ctrl+1 will cycle between (in my case) my running instances of Visual Studio 2010. Alternatively, simply pressing Win+1, Win+1 will alternate between the desktop preview windows for that application, and releasing the Windows key will bring up the selected application instance.

   As long as I keep these applications pinned to my taskbar in the same position, Win+1 will ALWAYS be Visual Studio 2010, Win+2 will ALWAYS be Microsoft Blend, etc. That is a big deal coming from the Alt+Tab world.

Now you may be thinking, “So what? I use a keyboard mapping or macro program called AutoHotKey (or something similar) and already have this set up.” Me too -

but wait, there’s more!

   If we press Win+Alt+1 [ 2, 3, … n ] we get access to the application’s “Jump List”. This is where these keyboard shortcuts really start to shine. Here’s what I get when I press Win+Alt+1 asking for Visual Studio 2010’s Jump List – and yep, I can use the good old arrow keys to get to the solutions I’m currently working on.


   I’m currently doing development in WPF so it is a HUGE time saver to switch between Visual Studio & Blend using Win+1 and Win+2 respectively. Alt+Tab is great, but sometimes you go to other applications in between using your core tools, and it’s always that few milliseconds spent recognizing where the application I need is in the Alt+Tab chain that drives me insane.

   By the way, if you use Alt+Tab quite a lot with many windows open, you can use the arrow keys to navigate the large tiled list. For instance the down arrow will take you down a row of applications after you’ve started the Alt+Tab maneuver, skipping a few Alt+Tab key iterations.

Sunday, January 17, 2010

Using Microsoft Live Writer with Blogger

I was recently pondering my reluctance to post to my blog. I analyzed the problem and came up with the following facts…

  • I don’t like the blogger interface for posting to my blog. Especially when I need to post code samples.
  • I don’t typically take the time to post something cool I just did with some code I’ve written because I would typically want to upload a new project that people could download and explore should they want to.
  • I won’t typically post to my blog after a long day of writing code.

Making these identifications is the first step in solving the problems. I’m certainly not a lazy person. I want to post more useful content to my blog, so these issues need to be resolved for me personally before blogging on a regular basis will become reality.

Setting up Live Writer with blogger is a snap. I simply went to the Tools menu then Accounts…


… entered the required URL for my blog, my username and password ( I use my Google credentials ) and voilà… Windows Live Writer is up and running. I’m publishing as I go as I write this. A very cool feature.


So far, so good, but what about source code? I use the Manoli formatter, which is quite good. I actually downloaded the source code and started to write myself a little blogger application to handle my blog posts, or at least allow me to cut/paste the content into the Blogger editor. That didn’t pan out due to time constraints and the fact that I wanted something now, now, now.

Anyway, Live Writer seems to bridge that gap for me so far. The previous images (which are totally superfluous since the setup process is so easy) were all handled automagically via Live Writer. That means I don’t have to upload the image to Flickr, get the public link to the image, cut/paste and mangle that link into Blogger, etc. OK, I’m making it sound harder than it is, but repetitive processes like that drive me insane. That’s why I’m a software developer… I’m too lazy to do anything twice.

So now, let’s try some code… If I hop over to Visual Studio, copy some generic Cache code I wrote, paste it into Manoli to get my CSS tagged HTML back, and paste that into Live Writer, I can preview what my code snippet is going to look like quite easily…


That’s good stuff! I can immediately see that one code line is wrapping due to the format, styling and layout of my blog. Poof! No worries, I can just edit my code in Visual Studio and paste back into Manoli for a quick update…

public class Cache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> cache =
        new Dictionary<TKey, TValue>();

    public TValue this[TKey key]
    {
        get { return cache.ContainsKey(key) ? cache[key] : default(TValue); }
        set { cache[key] = value; }
    }

    public TValue Fetch(TKey key, TValue value)
    {
        if (!cache.ContainsKey(key))
            this[key] = value;

        return value;
    }
}

…and there you have it. Easy code that will fit inside my blog nicely without too much hassle. So far, Windows Live Writer just scored mega kudos with me. Look for more blogging soon. Maybe I can keep my New Year’s resolution after all. Note: Live Writer is by no means specific to Blogger; it supports many blogging platforms. Let me know what you like and don’t like about it. My 5 minute analysis is “Why wasn’t I using this the minute it was released?” Anyone use any similar applications? I’d love to hear your stories. My next task: making the format of my blog more Web 2.0 compliant.

Sunday, January 3, 2010

Linq Queries against most collections including ListView, ListViewItemCollection, ControlCollection or anything IEnumerable

The Problem:

You cannot run Linq queries against many Framework collections such as ListViewItems and Controls in an immediately obvious manner.

if (list.Items.All(item => item.Checked))
    return true;

The previous code snippet won't compile, won't work and won't shove you in the right direction via IntelliSense, Online Help, or by performing a super-quick Google for the hopelessly attention-deficit disordered such as myself. (more on the Googling later)

The Fix:

Introducing the Enumerable.Cast<TResult> method

if (list.Items.Cast<ListViewItem>().All(item => item.Checked))
    return true;

Using this method you can rewrite the original code quite easily and Linq query the collection to death with all of your favorite little Linq sledgehammers...

The Why:

Background story for those of you who didn't jump ship to go off using the solution...

I was trying to do something I thought would be very straight-forward using Linq. I wanted to sync a tri-state "parent-relationship" CheckBox control with a ListView control that contained "child-relationship" items that had their own respective item-specific checkboxes. The parent to child relationship here is conceptual and not baked into the controls; we're talking a simple CheckBox and ListView here.

You've all probably done something similar whether it be with a TreeView using checkboxes containing items with checkboxes. You've certainly seen this behavior if you've ever done a backup and selected what you want backed up on a hard drive. The concept is simple: if all child items are checked or unchecked, the parent CheckBox should be Checked or Unchecked accordingly. If some of the child objects are checked and some aren't, the parent CheckBox should be Indeterminate.

(The backup then flubs your selection, doesn't back up your nicely selected SQL database, and can't seem to adequately backup to your 1TB external HD popping up endless dialogs to let you know how inefficient it truly is but that's a whole different post altogether.)

All fine and dandy, this will be easy, right? Well, the first thing I wanted to do is say something like...

if (listEmployees.Items.All(λ => λ.Checked))
    checkEmployees.CheckState = CheckState.Checked;
else if (listEmployees.Items.All(λ => !λ.Checked))
    checkEmployees.CheckState = CheckState.Unchecked;
else
    checkEmployees.CheckState = CheckState.Indeterminate;

Obviously, my ListView is a list of Employee business objects and my CheckBoxes on my list items are per employee item that I'm displaying. Nothing fancy here.

Note: I use the λ character now for a lot of my Linq statements after reading this question about LINQ to SQL business object creation best practices. I agree that it's technically not the most accurate usage of the Lambda character; however, it does clarify the code (to me) and reduces the chance of variable declaration conflicts with Linq queries.

So, I'm ready to compile, skip testing, deem this code ready for production and ship it to my hungry client when this little nastygram pops up during compile...

'System.Windows.Forms.ListView.ListViewItemCollection' does not contain a definition for 'All' and no extension method 'All' accepting a first argument of type 'System.Windows.Forms.ListView.ListViewItemCollection' could be found (are you missing a using directive or an assembly reference?)

Grr. OK, so off to Google I go (oh you do it too!)... I land on
LINQ on ListView.Items (ListViewItemCollection) telling me in no uncertain terms, with an accepted answer, that this just can't be done. Piffle! No way! I'm outraged! Linq can do anything!

So, I typically look at one Google answer like this and go back to the code to see what I, with all my omnipotent developer powers, can figure out. (Usually I'm humbled to admit the same defeat as the previous blog poster, but not this time!)

So, instinct and too much coffee tells me to look up 2 things...

1.) What are the requirements, more specifically, the "where" clause, if any, of the Linq method "All"
2.) If the ListView.Items property doesn't meet this requirement, then why doesn't it dangit!?!

So, doing a "Go to definition" on the All Linq Extension method I come up with the following in my sweet little meta data viewer.

public static bool All<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate);

The Linq All extension method requires an IEnumerable<TSource> as the source of the extension, meaning that only objects that expose IEnumerable<TSource> as part of their class definition will get picked up and be extensible using the Linq system. There's no where clause; it must be an object supporting IEnumerable<TSource>, whether by inheritance or interface implementation.

OK, I was thinking just IEnumerable personally, but actually the generic IEnumerable<TSource> makes sense, since Linq has to perform queries using properties of the object being extended. Meaning if Linq extended plain IEnumerable, that's great, but when I went to do my neat little .Checked (is true) statement, Linq wouldn't know what the heck a .Checked was, because plain IEnumerable is just an enumerable collection of objects.

Fine, makes sense, so what the heck is a ListView.Items collection then? Back over to the "Go to definition" meta lookup for that guy...

[Editor("System.Windows.Forms.Design.ListViewItemCollectionEditor, System.Design, Version=, 
Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a", typeof (UITypeEditor))]
public ListViewItemCollection Items { get; }

Uh...OK, so the Items property is this ListViewItemCollection class, but what is that? AGAIN with the lookup I come to...

public class ListViewItemCollection : IList, ICollection, IEnumerable

Aha! It's the problem I just defined in my mind. ListViewItemCollection is just an IEnumerable collection (of objects). The indexer...

public virtual ListViewItem this[int index] { get; set; }

.. is the reason we can easily deal with ListViewItems when we're doing foreach loops on the ListView.Items property. It does the casting for us (strictly a cast rather than boxing, since ListViewItem is a reference type, so the per-item cost is just a type check).

OK, so problem identified, but what could I do about it? I'd love to say it was some methodical deduction that brought me to the Cast method, but I just used IntelliSense on the ListView.Items property to see what Linq extensions were available to me.

I started looking into AsQueryable() but I couldn't quite achieve what I wanted. Then, I noticed the little Cast beneath it (in IntelliSense). Poof, that worked.

Voila, now you can run your handy Linq queries against most anything IEnumerable as long as you can, with some accuracy, cast each member to a specific type.
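The same trick works on any of the old non-generic collections. Here's a self-contained example using ArrayList as a stand-in for ListViewItemCollection (so it runs without Windows.Forms):

```csharp
using System;
using System.Collections;
using System.Linq;

class CastDemo
{
    static void Main()
    {
        // ArrayList, like ListViewItemCollection, is plain IEnumerable
        // (objects only), so it needs the same Cast<T>() bridge into Linq.
        var mixedBag = new ArrayList { "alpha", "beta", "gamma" };

        bool allNonEmpty = mixedBag.Cast<string>().All(s => s.Length > 0);
        bool anyLong = mixedBag.Cast<string>().Any(s => s.Length > 4);

        Console.WriteLine(allNonEmpty); // True
        Console.WriteLine(anyLong);     // True ("alpha" and "gamma" have 5 letters)
    }
}
```

One caveat: if any element isn't actually the type you cast to, Cast<T>() throws an InvalidCastException at enumeration time, which is why the "with some accuracy" part matters.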

On the Googling: I thought to myself, cool! I figured out a problem everyone can use everywhere at all times. Then that creeping feeling overcame me: "Nah, that was too easy," so I went back to Google. I wasn't a pioneer after all, drat. A Good, practical LINQ example shows you the same technique with a bit more detail and insight (but less comedy) than I came up with. I didn't realize that Linq works on sequences, so you should check that post out as well for a better understanding. Our similarities in deducing the same conclusion were a bit eerie, but I'll let that slide and drink more java (coffee, not the language).

I will post my final solution to the checkbox issue itself because it screams reusable code to me. I will refactor my solution into something that isn't bound to specific controls, maybe not even Windows.Forms controls and post it here later.

I made a New Year's resolution to blog at least 4 times a week and mean to keep it, so this is the first installment (late already).

That's all for today and happy Linq'ing...

Friday, October 23, 2009

Where does my blog title come from?

In case you are wondering about the odd title of my blog it is a little inside joke. In addition to being an old school software developer I'm also an old school gamer. The reference is best described here: All your base are belong to us. It's just a simple play on words built on an old poorly translated game dialog.

Using the ResolveUrl method inside your class library Business Logic Layer (BLL)

Getting around passing HttpContext.Current to your business logic layer, or BLL.

The Problem: 
You want to use handy methods like ResolveUrl inside your Business Logic Layer (BLL) which is a class library. You don't want to reference System.Web or any .Net framework library beneath the System.Web subsystem inside your BLL libraries.

The Fix:
Create a small Utility class inside your class library and pass it a delegate. I know, I'm not a big fan of Utility (or similar) classes, but sometimes you need a little tool-belt class for miscellaneous utilitarian methods. I actually have a Common class library that has even fewer dependencies than my typical BLL library, which is where I stuck this class. Obviously, place the code where appropriate for your solution.

using System;

namespace Common
{
    public static class Utility
    {
        public static Func<string, string> ResolveUrl;
    }
}

Now in your global.asax place the following.

private void Session_Start(object sender, EventArgs e)
{
    AssignBusinessLogicLayerHelpers();
}

private static void AssignBusinessLogicLayerHelpers()
{
    var context = HttpContext.Current;
    if (context == null) return;

    var page = context.Handler as Page;
    if (page == null) return;

    // assign the resolve url method
    if (Utility.ResolveUrl == null)
        Utility.ResolveUrl = page.ResolveUrl;

    // assign other helper methods here...
}

Here we've elegantly passed a generic delegate to our Utility class that will reference back to the provided ResolveUrl method. The method stamp deals with primitive types only (strings) so we have no dependencies passing this function pointer across the domains.

The Why:  

You do not want to have a BLL or similar class library utilizing System.Web, or System.Windows.Forms for that matter. If you have a solution with a web service, a web site and a common class library (BLL or other) and you start using methods dependent on HttpContext.Current, you are going to run into issues, as this context is not the same between the web service and your web site. A common scenario might be a LINQ2SQL business object layer on top of your SQL database which has effectively become your BLL. You may have classes that need to store virtual, resolved paths into the database, but you want to place the resolving of the URL inside your business object classes to enforce strong encapsulation of the class. Here's an easy, 5 minute way to provide such features to your business objects without breaking your n-Tier application model. Now you can happily store the AvatarUrl of your users in the database and have your aspnet_User class do the resolution on the fly!
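To see the whole loop end to end, here's a runnable sketch; UserProfile and the "/myapp" resolver are made-up stand-ins (in global.asax the resolver would be page.ResolveUrl):

```csharp
using System;

// Same shape as the Common.Utility class above.
public static class Utility
{
    public static Func<string, string> ResolveUrl;
}

// Hypothetical business object that stores virtual paths.
public class UserProfile
{
    public string AvatarPath { get; set; } // stored as "~/avatars/42.png"

    public string ResolvedAvatarUrl()
    {
        // No System.Web reference needed here - the web tier injected the resolver.
        return Utility.ResolveUrl != null ? Utility.ResolveUrl(AvatarPath) : AvatarPath;
    }
}

class Demo
{
    static void Main()
    {
        // Stand-in for page.ResolveUrl so the sketch runs outside a web app.
        Utility.ResolveUrl = virtualPath => virtualPath.Replace("~", "/myapp");

        var user = new UserProfile { AvatarPath = "~/avatars/42.png" };
        Console.WriteLine(user.ResolvedAvatarUrl()); // /myapp/avatars/42.png
    }
}
```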

Saturday, October 17, 2009

FIXED: Windows could not start the Subversion Apache on Local Computer

The Error:

Windows could not start the CollabNet Subversion Apache on Local
Computer. For more information, review the System Event Log. If this is
a non-Microsoft service, contact the service vendor, and refer to
service-specific error code 1.

The Fix:
Open the file C:\Program Files\CollabNet\Subversion Server\httpd\conf\httpd.conf in Notepad. If you installed to a different directory, or you're not installing Apache via CollabNet, navigate to [Root of Apache Install]\conf\httpd.conf and open it in Notepad.

Find the following lines near the top of the file.

# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
Listen 80

Change the Listen 80 line to a port number you're not using (lines starting with # are comments; leave those alone). If you don't know a free port, pick an oddball and change it until it works.

Listen 31337

The Why:
I ran into this error while installing the Apache 2.2 server on my local Windows Vista machine during the CollabNet Subversion install process. The error reared its ugly head when I tried to actually run the Apache service that was installed. The problem, I'll freely admit, is that I've installed so much Windows software over the years that I'm prone to accepting the default install settings and fixing things later if need be. Obviously, when you're porting the Apache server to Windows, a little more care is needed than when installing a simple Windows application. This little issue popped up because I accepted the default of port 80 for the Apache server. Oops. Needless to say, as an Asp.Net developer I already have that port in use.
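If you're not sure what already owns port 80, a quick check from a Windows command prompt will show the listeners on that port along with the owning process ID, which you can then look up in Task Manager (note this is Windows-specific; the netstat flags and findstr differ on other platforms):

```shell
netstat -ano | findstr :80
```

The last column of each matching line is the PID of the process bound to the port.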

The Event Logs provided the real information I needed to solve the issue.

Error in System Log: 
The CollabNet Subversion Apache service terminated with service-specific error 1 (0x1).

Information in System Log: 
The CollabNet Subversion Apache service entered the stopped state.

Error in the Application Log:
The Apache service named  reported the following error:
>>> (OS 10013)An attempt was made to access a socket in a way forbidden by its access permissions.  : make_sock: could not bind to address

Eureka! Yes, that would cause a problem. It's times like these when I'd like to perform some aerodynamics testing on my laptop...

Software Developer Browser and Plugin Choices

I do a lot of Asp.Net development, and I use Firefox as part of my development toolkit when diagnosing performance and design issues with any website I'm working on. I'm not going to touch on various methods of developing a website, as I feel that's akin to trying to change someone's religion. Most of the tools I use, however, are independent of your development platform choice.

Firefox really shines as a development tool when you start utilizing some of the powerful developer-minded plug-ins that are available. I'm going to take a high-level view of these plug-ins in this post, then drill down into the features of the more comprehensive tools in future posts.

Yahoo! YSlow: I love it when powerhouses like Google, Yahoo, and Microsoft give back to the developer community by providing free and useful tools. YSlow is a monster, albeit a well-designed one, in that its features aren't invoked until you ask for them, thus keeping your average browsing experience relatively quick. This tool is indispensable for analyzing website speed bottlenecks and, consequently, design (layout) issues. Yahoo's YSlow relies on another must-have tool, Firebug. Again, I will go into much greater detail about these tools soon, but if you're not currently using them I highly recommend checking them out. They will make your website development a brighter world, and neither is specific to Asp.Net.

Skynet's HTML Validator: Even if you're not a standards-compliance-centric developer (shame on you), this tool is invaluable. There have been countless times I've run into a layout or design flaw that I couldn't figure out even with Firebug. Sometimes the issue is, quite simply, an artifact in my outgoing XHTML that is propagating down the DOM to create very undesired results. The HTML Validator is a great tool for catching these flubs.
If you are a standards-minded developer and strive to add the...

Valid XHTML 1.0 Transitional

(or similar compliance) logo to your website, then this tool offers huge time savings by bringing the validation service to your local machine via Firefox.

Colorzilla: This is a handy tool for fetching HTML color codes in various formats. Although not a power tool in terms of performance analysis, I couldn't live without it and use it daily.

LiveHttpHeaders: When I need information on what is being posted/received in the HTTP headers this is where I go.

IE Tab: Although I do my personal browsing and development testing in Firefox, my target audience typically browses with Internet Explorer. This tool is great for switching a tab from the Gecko rendering engine over to IE's engine to take a look at what the customer will see, without loading up IE 8 just to check that each page I've designed looks the same as it does in Firefox. Also, if you've ever filled in form information on an ActiveX-dependent web page only to get stopped because the page requires IE, this is a great way to switch over, refresh the page, and not lose everything you've typed up to that point.

FireShot: I use this extensively for documentation provided to clients of the websites I'm working on. The ability to grab complete screen-shots of a web page, not just the portion visible on the screen/browser, is invaluable. It also provides powerful annotation/editing tools to make documentation simple and easy.

ShowIP: A nice quick tool that keeps me from having to dig up the IP Address of the server I'm working on.

Extended Statusbar: This is a useful one-stop shop for page download and render times.

Clear Cache Button: O.K., not really a developer tool but install it and try living without it. Often I need to clear the browser cache to ensure I'm looking at my last minute development changes.

Google Chrome: While Firefox will probably always remain my workhorse for development testing and debugging, sometimes I need a lightweight browser just to take a quick test run through some code changes. (Firefox is a very lightweight browser until you load a ton of plug-ins into it.) As a contract software and web developer, I do a lot of work on laptops when I'm not at home. When I'm working on a less-than-hardy development workstation such as a laptop, I'll use a default, no-frills install of Chrome to view and test website changes. It carries a much lighter footprint than Internet Explorer, of course, while also exposing glaring design flaws under the WebKit rendering engine. WebKit is also used by Safari, so if you don't have a Mac lying around somewhere, this is a simple and easy way to get a look at what your Mac-based audience is typically seeing. Note that Chrome and Safari do not track the same WebKit versions, so Chrome is not a replacement for Safari-targeted development. So, until Firefox (or a plug-in) makes profile switching a breeze, allowing me to mindlessly run a stripped-down or bloated version of Firefox with ease, Chrome will keep a place in my toolkit.

That's all for now. Soon I will do a much more detailed write-up of Firebug and YSlow.