Cross Database Support

ADO.NET tried really hard to solve the cross-database support problem, and version 2.0 (or thereabouts), with the System.Data.Common namespace, does a pretty good job. But when I tried to support MS-SQL and MS-Access, here is what I ran into:

Connection string management is a pain. If you are configuring an app to support MS-SQL and MS-Access (for a library app, in my case a hit counter), you need up to six connection strings:
1) OleDB Access – because this was the old cross-DB API of choice
2) ODBC Access – because OleDB is deprecated and ODBC is the new cross-DB API of choice
3) SQL OleDB – same template, different provider
4) Native SQL – some things have to be done natively, such as bulk import.

I need something more than a connection string builder; I need a connection string converter. Once I have the native SQL version, I should get the OleDB version and the ODBC version for free.
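Here is a minimal sketch of what such a converter might look like. ConnectionStringConverter and its method names are my own invention, not an existing API, and a real version would need to map many more keys:

```csharp
// Hypothetical sketch: derive OleDB and ODBC connection strings from the
// native SQL Server one, so only one string needs to be configured.
using System.Data.Common;

static class ConnectionStringConverter
{
    public static string ToOleDb(string nativeSqlConnectionString)
    {
        var builder = new DbConnectionStringBuilder();
        builder.ConnectionString = nativeSqlConnectionString;
        // Same keys as the native string, plus the Provider key OleDB requires.
        builder["Provider"] = "SQLOLEDB";
        return builder.ConnectionString;
    }

    public static string ToOdbc(string nativeSqlConnectionString)
    {
        var native = new DbConnectionStringBuilder();
        native.ConnectionString = nativeSqlConnectionString;

        // ODBC uses different key names for the common settings.
        var odbc = new DbConnectionStringBuilder();
        odbc["Driver"] = "{SQL Server}";
        object value;
        if (native.TryGetValue("Data Source", out value)) odbc["Server"] = value;
        if (native.TryGetValue("Initial Catalog", out value)) odbc["Database"] = value;
        if (native.ContainsKey("Integrated Security")) odbc["Trusted_Connection"] = "Yes";
        return odbc.ConnectionString;
    }
}
```

DbConnectionStringBuilder does the parsing and quoting; the converter's only real job is the key-name translation table.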

Next, ADO.NET doesn't make any effort to convert SQL text from one dialect to another, not even for parameters. So I wrote this code myself.
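The parameter half of the problem, at least, is mechanical: SqlClient takes named @parameters, while OleDB and ODBC want positional ? markers. A minimal sketch (it assumes no repeated parameters and no @ signs inside string literals; a real version would need a smarter tokenizer):

```csharp
// Rewrite named @parameters into the positional ? markers that the
// OleDb and Odbc providers expect, recording the order so values
// can be bound by position afterwards.
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class SqlDialect
{
    public static string ToPositional(string sql, out List<string> order)
    {
        var names = new List<string>();
        string converted = Regex.Replace(sql, @"@\w+", m =>
        {
            names.Add(m.Value); // remember the order of appearance
            return "?";
        });
        order = names;
        return converted;
    }
}

// "SELECT * FROM Hits WHERE Page = @page AND Day = @day"
// becomes "SELECT * FROM Hits WHERE Page = ? AND Day = ?"
```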

Cross DB When You Can, Native When You Have To
Some application features just require really fast inserts. For MS-SQL that means bulk copy. For MS-Access, that means single-statement batches and a carefully chosen connection string. The System.Data.Common namespace lets you use factories that return either OleDB or native providers, but once they are created it is one or the other. What I wish existed is a systematic way for the code to check for a feature: if the provider has it, use it; if it doesn't, fall back. Obviously this sort of feature testing could be a real pain to write for some features. But for things like, say, stored procedures, how hard would it be to check for stored proc support and, where it exists, create a temporary or permanent stored procedure to execute a command instead of just raw SQL? I haven't really figured out a way to implement this feature.
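The crudest form of the probe-and-fall-back idea is just a runtime type check on the connection. This is a sketch, not a general feature-detection framework; FallbackInsert is a hypothetical batched-INSERT helper:

```csharp
// "Native when you have to": probe the concrete connection type at
// runtime and take the fast path when it exists.
using System.Data;
using System.Data.Common;
using System.Data.SqlClient;

static class FastLoader
{
    public static void BulkInsert(DbConnection connection, DataTable rows)
    {
        var sql = connection as SqlConnection;
        if (sql != null)
        {
            // Feature exists: use native bulk copy.
            using (var bulk = new SqlBulkCopy(sql))
            {
                bulk.DestinationTableName = rows.TableName;
                bulk.WriteToServer(rows);
            }
        }
        else
        {
            // Feature missing (e.g. MS-Access): fall back to plain INSERTs.
            FallbackInsert(connection, rows);
        }
    }

    static void FallbackInsert(DbConnection connection, DataTable rows)
    {
        // One INSERT per row; slow but works on any provider.
    }
}
```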

Are you Really Cross DB Compatible?
Of course I am. After every compile, I stop and test against all 14 database providers and configurations. Yeah, right. If the application isn't writing to a given database right now, I'm not testing it. So after I got MS-Access working, I got SQL working, and MS-Access support broke. Then I got MS-Access going again. Then they both worked. Then I added a new feature with MS-SQL as the dev target. Then MS-Access broke. And so on.

ADO.NET executes one command against one database. What I need to prove that I have cross-DB support is "multi-cast": each command needs to be executed against two or more different databases to prove that the code works with all providers. This also creates an interesting possible feature, data-tier mirroring, something that usually requires a DBA to carefully set up and that depends on a provider's specific characteristics. With multi-cast, you can do a heterogeneous mirror – write to a really fast but unreliable datastore and also to a really slow but reliable datastore.

I plan to implement multi-cast next.
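The shape I have in mind is roughly this – names are my own, and error handling per target is deliberately left out of the sketch:

```csharp
// Run the same command text against several providers via the
// System.Data.Common factory mechanism.
using System.Collections.Generic;
using System.Data.Common;

static class MultiCast
{
    // connectionStrings maps a provider invariant name
    // ("System.Data.SqlClient", "System.Data.OleDb", ...) to a
    // connection string for that provider.
    public static void ExecuteAgainstAll(
        IDictionary<string, string> connectionStrings, string commandText)
    {
        foreach (var target in connectionStrings)
        {
            DbProviderFactory factory = DbProviderFactories.GetFactory(target.Key);
            using (DbConnection connection = factory.CreateConnection())
            {
                connection.ConnectionString = target.Value;
                connection.Open();
                using (DbCommand command = connection.CreateCommand())
                {
                    command.CommandText = commandText;
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}
```

For the mirroring use, each target would get its own try/catch and its own policy about whether a failure there fails the whole write.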

Sites I am migrating

I created them in my spare time, and they add up after a while:

.NET Efforts
http://tokipona.net/ – .NET
http://tokipona.net/tp/janpije/ – A static mirror I’m hosting for jan Pije
http://wordgenerator.wakayos.com – a front end to a word generation tool
http://gloss.wakayos.com – Helps generate linguistic interlinear gloss formatting
http://locavoredc.wakayos.com – A .NET wiki that has some stale info about how to be a locavore in the DC area.

PHP Efforts
http://polyglotdc.suburbandestiny.com/ – A directory of language resources in DC.
http://learnicelandic.net/ – Not sure what to do with this. It is a wiki right now.
http://learnicelandic.net/twitter – a content site that is essentially a blog post about using Twitter for foreign language learning
http://learniceland.net/join – a landing page I used for a Google Ads campaign for my Icelandic meetup.

And that is about it.

Customizations I used with Elmah

Elmah isn't especially secure if you assume the error log itself has already been breached. Even if it hasn't been breached, sometimes Elmah logs things that the administrator doesn't want to know, like other people's passwords.

There are some reliability issues too.

1) Don’t log sensitive data.
- Some data lives in well-known places, e.g. HTTP headers
- Some data does not, e.g. the textboxes where users enter their passwords
- ViewState for the above
2) Don't refer to DLLs that won't exist, for fear that dynamic compilation will fail due to a reference that can't be found – for example, the SQLite assembly. I understand why the main project is set up this way, though: the goal was to minimize the number of assemblies distributed while still supporting lots of databases. This could also be a non-issue; assembly resolution, for me, has always been black magic.
3) Override email to use the app's config instead of Elmah's config sections in the ErrorMailModule. I don't like doubled config settings, where my app has a setting and so does the component.
4) Use the app's role system and PrincipalPermission to restrict display to certain roles.
- Add PrincipalPermission attributes to all classes that view things (but not classes that log things); see the end for a list. If you don't trust your server admins to keep from messing up the web.config, you can put the role checks right into the code. This set worked for me.
5) Strengthen XSS protections.
Change Mask. and HttpUtility.HtmlEncode to AntiXss.HtmlEncode. This creates a dependency on either the AntiXss library or .NET 4.0.
6) Add CDATA to javascript blocks
7) Switch to READ UNCOMMITTED in SqlErrorLog.cs. The error log must not itself cause errors (i.e. deadlocking).
8) When the error log gets really large, it has to be rolled over and truncated to prevent locking issues. This was a problem at least in SQL 2000 and, I think, SQL 2005.
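For item 7, the idea is that any query reading the error table should never block against, or deadlock with, the code that is busy logging new errors. A sketch (the table name matches Elmah's standard ELMAH_Error schema; the helper itself is mine):

```csharp
// Read the error log under READ UNCOMMITTED so the reader never
// takes locks that could conflict with in-flight error inserts.
using System.Data;
using System.Data.SqlClient;

static class DirtyReader
{
    public static int CountErrors(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var tx = connection.BeginTransaction(IsolationLevel.ReadUncommitted))
            using (var command = connection.CreateCommand())
            {
                command.Transaction = tx;
                command.CommandText = "SELECT COUNT(*) FROM ELMAH_Error";
                return (int)command.ExecuteScalar();
            }
        }
    }
}
```

The trade-off is dirty reads, which is fine here: a slightly stale error count beats a deadlocked error logger.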

List of classes that could use a security attribute, should you choose such a strategy.

AboutPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorDetailPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorDigestRssHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorHtmlPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorJsonHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogDownloadHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogPageFactory.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorRssHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorXmlHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]

Custom Exception Antipatterns

try { }
catch (SomeException)
{ throw new MyCustomException("Error in middle tier, method foobar()"); }

The above is wrong for multiple reasons. The message text only tells you some stack trace information. Unless you know for sure that your error logging or reporting infrastructure discards the stack trace and you can't fix it, don't put stack trace information in your error message. Both the default yellow screen and Elmah capture the stack trace.

The next reason the above is wrong: it overwrote the error. The error used to be something specific, e.g. a SecurityException because you had the wrong NTFS permissions, or a FormatException because you had two decimal points. Now the error has been overwritten with "something bad happened."

try { }
catch (SomeException e)
{ throw new MyCustomException("You need to run batch file foo.bat", e); }

The above pattern wraps the exception. But error loggers don't always display inner exceptions, especially if they are nested several levels deep. Don't wrap errors unless the wrapper provides something remarkably valuable, or unless you are *actually* writing code to specifically trap that sort of error on the next tier (planning to, or thinking you might in five years, doesn't count).

In ASP.NET, don't catch unless you feel pain from not catching.

In WinForms you have to catch, or else the application exits. But in ASP.NET, only the page request ends. From the user's standpoint, the application is still running, because the next request can still succeed.
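When you genuinely must intervene mid-stack, the pattern I'd endorse keeps the original exception intact. ImportBatch and the Data key here are invented for illustration:

```csharp
// Rethrow without wrapping: a bare "throw;" preserves the original
// exception type and stack trace, and Exception.Data carries the
// extra context the wrapper message would have held.
using System;

static class Importer
{
    public static void Run()
    {
        try
        {
            ImportBatch(); // hypothetical operation that may fail
        }
        catch (Exception e)
        {
            e.Data["hint"] = "You need to run batch file foo.bat"; // context, no wrapping
            throw; // original type and stack trace survive
        }
    }

    static void ImportBatch() { /* ... */ }
}
```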

It's not engineering. It's schoolyard psychology

#regions are where we hide our shame and self-loathing at not being able to refactor our class into anything smaller than 2,500 lines of code. We only tell ourselves that we are being tidy so the self-delusion doesn't seem so obvious and depressing.

XML comments are where managers say "Get moving on documentation!" and developers respond by shuffling from foot to foot to show some movement.

The same people who write empty XML comments and add regions to their massive classes are the people who ignore compilation warnings.

Microsoft is kind of like a parent with multiple, perverse personalities. They tell us not to use AutoEventWireup because it's a bug generator; then it is the default in Visual Studio.

Validation Application Block Post Mortem

VAB seemed to work only for simple domain restrictions on a single property. This is the least interesting, least value-providing sort of validation. At worst it is a two-line if/then block of code! Replacing it with an attribute saves one line of code and creates some design rigidities.

If you had complex logic (and it didn't have to be very complex), you were tempted to write complex expressions in attributes. This is ugly programming.

If you had complex logic, you were supposed to write custom validators. Contrary to the blog post that says "This is very easy to do," it is not easier than writing a Validate() method – at the very least, it's hundreds of lines of extra code.

The custom validators tend to look a lot like
List<string> Validate(MyObject someObject),
which is the way things were before attribute-driven programming.
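For comparison, here is the plain-code alternative VAB was competing with. MyObject and its rules are invented for illustration:

```csharp
// A plain Validate() method returning broken-rule messages --
// no attributes, no framework, and cross-property rules fit naturally.
using System.Collections.Generic;

class MyObject
{
    public string Name;
    public int Quantity;

    public List<string> Validate()
    {
        var errors = new List<string>();
        // Each rule is the "two-line if/then block" mentioned above.
        if (string.IsNullOrEmpty(Name))
            errors.Add("Name is required.");
        if (Quantity < 0)
            errors.Add("Quantity cannot be negative.");
        return errors;
    }
}
```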

The codebase began to evolve to use VAB less and less.

Today I’m removing the assembly.

Picking .NET tools when you want to collaborate

Something as simple as solution folders can stop you from collaborating with anyone except Visual Studio Professional.

Strategies for the IDE
Target the largest crowd – for example, everyone with VS2005. A good percentage of those people only have a licensed copy at work and won't have time to dink around with an open source project unless it happens to be solving a commercial problem for them.

Target multiple crowds. E.g., distribute with #if directives for 1.1, 2.0, 4.0, and Mono, plus six different solution and project files targeting VS.NET, VS2003, VS2005, VS2008, VS2010, and MonoDevelop. The extreme version of this strategy is crazy.

Target the poor: Visual Studio Express, NAnt + Notepad, csc.exe + Notepad, WebMatrix, CodeRun, MonoDevelop. OK, but some of these have really low productivity characteristics.

Hybrid strategies. Make sure the largest crowd can contribute, and maybe also one subset of the poor, such as the previous release of VS or Visual Studio Express. This will require periodically opening the solution in the less expensive environment to make sure that you aren't using a feature the other IDE doesn't have.

Strategies for the rest
GitHub or CodePlex or what? Use one site for bug tracker, source control, and everything else, or pick your favorite bug tracker, source control, etc., separately.

VS2010 Express + SVN/Git + Team City
You can create DLLs, WebForms, MVC, Silverlight widgets, and WCF services. It's kind of an "all you have is a hammer" situation, and these are your hammers.

VS without Ankh is a bit trickier; you'll probably just use TortoiseSVN. You will need to know on your own not to check in /obj/, *.cache, *.suo, and maybe not /bin/.

Codeplex has an SVN bridge. Cool.

Since you can create DLLs, you can create unit tests separate from the code under test. At the moment, all I can find is the NUnit GUI test runner. I'm not seeing a way to add commands to the menu in VS Express, but you can add post-build steps that call the NUnit GUI test runner. On the other hand, you wouldn't want that to run on the build server, because the build server has its own unit test runner. So again, it's just a non-integrated experience.

MaxiVista and Visual Studio 2010 and my computers

I have MaxiVista version 4.0.11, downloaded in December 2009. It is different from the version available in July 2010. I don't know if that makes a difference.

The Dec 2009 version lets Visual Studio 2010 run on two monitors, but not three.

The July 2010 version will not let Visual Studio run on two monitors – you get constant flashing and disconnect-reconnect cycles (the same as trying to take the Dec 2009 version to three monitors).

When rolling back, the old installer refused to run (it claimed it was outdated), and I had to reset the system clock to get the install to complete.

Oh well.

The demo version is .12, but re-downloading seems to force you to get only the point version you originally bought, which is odd because the site seems to imply that you can get point updates for free within a major version. I'm waiting for my phpBB credentials so I can ask on their support board.

A build server for home

I've got a WHS (Windows Home Server). So I just finished installing:

Subversion, via VisualSVN. Used AnkhSVN and checked in my code. Installed WAMP, then found out that my Windows Home Server for some reason already had a PHP installation, so I had to update the environment variables to point to the WAMP installation. I finally got VisualSVN working with WebSVN using Windows auth – it was tricky and required some .conf file reading. Using Cygwin, I got WebSVN working.

TeamCity. I didn't have to install Visual Studio, but I did have to copy the C:\Program Files\MSBuild files. It took an hour, but now my unit tests and code coverage tools are all running.

Issue tracker. TeamCity integrates with only three bug trackers, of which only one is flat-out free. So I'll be trying out Bugzilla, although several other bug trackers look more interesting. Update: In frustration, I gave up on Bugzilla, because Perl is for people who enjoy configuration and installation pain, chewing on glass, and poking their eyes with ballpoint pens. Next I tried the Ruby on Rails app "Warehouse," but the @#$@#$ db scripts refused to run, and I realized I didn't have the patience to build my own interop doohickey for Ruby-to-SVN communication. So, on to Mantis and Flyspray. Superficially, they're closely matched, but Mantis has plugins and Flyspray is more familiar to me because I've used it many times before. So much work, just to find out that free tools for developers often come with a heck of an installation burden.

SharePoint homepage. I'm still trying to make WSS 3.0 SharePoint useful. So far only the link list is useful, but I may see about taking the RSS feeds from everything above and creating an RSS reader in WSS 3.0 to aggregate my feeds, which are essentially all intranet feeds. If it frustrates me too much, I'll install a PHP-based feed reader.

Stop! In the name of code!

Stop! In the name of code

Before you break the build!
I’m aware of where you go
Each time you check in code
I watch my rss feed of the daily build
Knowing your code turns green dots red
But this time before you run the build
Making us do more work
(Think it over) Have you done a unit test or two?
(Think it over) Is it refactored through?