How hard is it to make SQL Express easy?

Goal. Make it easy for an end user, say one with half as many brain cells as a mollusk, to install a DB-driven website.

Particulars. A typical database setup requires creating the database, then setting up the logins and users for the anonymous web user, the ASPNET worker account, and maybe an application user or role. The user then needs to run a TSQL script to install objects. Finally, the user needs to set the connection string in the web.config.

Solution so far. If the user has an existing database, user, and connection string and can put them in the web.config, I can run the TSQL scripts for him and even detect and create the application user.

Speed Bump. Wouldn’t it be easier if I created everything in advance, put it into App_Data, and connected to it using a user instance? Well, so one would think. It would mean you could use an mdf file without knowing the server name, the credentials, or the file location (except that it is in the usual App_Data folder).

Here is the magical connection string:

Data Source=.\SQLExpress;Integrated Security=True;User Instance=True;AttachDBFilename=|DataDirectory|calendar.mdf

Here are the magic error messages (a short list; I forgot to copy some of them down):

Invalid value for key ‘attachdbfilename’.

Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed.

“No connection could be made because the target machine actively refused it”

Unable to open the physical file “C:\Inetpub\foo\App_Data\aspnetdb.mdf”

Random things tried:
Don’t create the mdf file using the “Add New Item” menu in Visual Studio (create the DB the old-fashioned way with Management Studio)
Change .\SQLExpress to the actual server name, e.g. MyBox\SQLExpress
Change |DataDirectory| to the actual physical directory.
Grant rights to NETWORK SERVICE, LOCAL SERVICE, and MyBox\ASPNET to modify files in the App_Data folder (preferably only to the account the anonymous user is running as, not all three)
Switch from IIS to ASP.NET Development Server (or other way around)
Delete files found at:
C:\Documents and Settings\[some user name]\Local Settings\Application Data\Microsoft\Microsoft SQL Server Data\SQLEXPRESS
The above folder holds the various system databases that SQL Express creates when it spins up a user instance.
Don’t use Remote Desktop. When you create a user instance across Remote Desktop, it is hard to guess which user profile the various system databases will be written to.

Advice: User Instance, just say “=false”.
User instances are bad. Bad bad bad. They might be okay in a Windows application that you are running on a single disconnected machine in a salt mine a mile below ground.

This is only going to work if you have administrator rights, and you typically will not give your anonymous account admin rights to the database. So right away we can see that Integrated Security=True and AttachDBFilename=… do not go together…unless you are using Windows authentication. A brain-damaged mollusk doesn’t know how to set up Windows authentication, less so on a hosted account where the user admin tools are often crippled and incomplete (I’m thinking of the lunarpages control panel here).

Furthermore, |DataDirectory| doesn’t always resolve.

So what is left? We have a file that isn’t attached, that ADO can’t find, and we need a priori knowledge of the SQL instance name, the user ID, the password, and the database name. Sigh. Thanks, Microsoft. Not a single break.

Final Solution
Half-brained mollusks will have to rely on conventions. First, assume the server name is “localhost”. Second, the user will have to find out what the credentials are and hope they carry dbo rights. Finally, the user will have to know what the database name is.
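Under those conventions, the web.config connection string ends up looking something like this (the database name and credentials here are placeholders the user has to fill in):

```
Data Source=localhost;Initial Catalog=MyAppDb;User ID=MyAppUser;Password=secret
```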

Worse, the user will have to be able to edit the web.config file. I’m now going to work on a web.config-generating page, so the user can enter the five magic words and get a web.config file generated for him.
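The guts of such a generator are small; here is a sketch in Python for illustration (the function name, the bare web.config skeleton, and the sample values are all my own assumptions, not the real page):

```python
# Sketch of a web.config generator: take the five "magic words" and
# emit a minimal config.  A real version would merge into the site's
# existing web.config rather than emit a bare skeleton.
from xml.sax.saxutils import escape

def make_web_config(server, database, user, password, conn_name="AppDb"):
    conn = (f"Data Source={server};Initial Catalog={database};"
            f"User ID={user};Password={password}")
    quot = {'"': "&quot;"}  # escape quotes so the XML attribute stays valid
    return (
        "<configuration>\n"
        "  <connectionStrings>\n"
        f'    <add name="{escape(conn_name, quot)}"\n'
        f'         connectionString="{escape(conn, quot)}" />\n'
        "  </connectionStrings>\n"
        "</configuration>\n"
    )

print(make_web_config("localhost", "calendar", "appuser", "s3cret"))
```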


MS SQL Server: TSQL Float & Real Considered Dangerous

Sums of floats can change depending on sort order.

Comparison tools (Redgate SQL Compare in my case) can report that two rows are different because some numbers don’t have exact decimal representations, even though the values are the same. Likewise, retrieving rows based on a float value is iffy, sometimes requiring that you search a small range to return the row with a particular float value. This makes testing more complicated, since you get false alarms when comparing the results of two float operations.
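The order-dependence is easy to reproduce in any IEEE-754 environment; here it is in Python, whose float is the same 8-byte double as SQL Server’s float:

```python
# Summing the same three doubles in different orders gives different answers:
# 1.0 is below the rounding granularity at 1e16, so it can silently vanish.
vals = [1e16, 1.0, -1e16]

left_to_right = sum(vals)                 # (1e16 + 1.0) rounds back to 1e16
cancel_first  = sum([1e16, -1e16, 1.0])   # cancellation happens first, 1.0 survives

print(left_to_right, cancel_first)  # 0.0 1.0
```

A tool that re-sorts rows before summing a float column can therefore “legitimately” produce a different total than the original query.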

Redgate SQL Compare also has an overflow-equals-zero bug: it converts a float to decimal in order to compare it to a decimal, the conversion overflows, and it decides the overflowed value equals zero.

I’m not picking on Redgate specifically; these are bugs that could happen anywhere in the development stack (at the TSQL, ODBC, application, or third-party-tool level) when you are using float or real.

My advice is to use the decimal data type unless decimal is failing you and there is some compelling reason to use float/real. And if you do use one of them, pick float, since it is twice as precise as real (float is an 8-byte double; real is a 4-byte single).
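The representation problem behind those false alarms is the same one a base-10 type avoids; Python’s decimal module shows the contrast, playing the role of SQL’s decimal here:

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so repeated float math drifts:
print(0.1 + 0.1 + 0.1 == 0.3)  # False

# A base-10 decimal type stores the digits exactly, like SQL's decimal:
print(Decimal("0.1") + Decimal("0.1") + Decimal("0.1") == Decimal("0.3"))  # True
```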

Installing Oracle Drivers: A Heuristic Approach

Oracle, n. Database named after people who spent all their time sucking fumes and spouting nonsense.

Installation Phase. Download Oracle Express, Oracle ODBC, Oracle JDBC, and anything else that might have a functioning installer. You must be initiated into the brotherhood of those who receive spam from the Oracle. It’s kind of like hearing the voices of the gods in your head, but it comes by email.

Sacrifice. Sacrifice a lackey to the Oracle. The Oracle is an angry god and must be appeased. The version number indicates how many the Oracle demands.

File Search Phase. Find the listener.ora, tnsnames.ora, sqlnet.ora. There may be many, you will never know which one an active driver may be using.

Ritual. Reboot the server, your workstation, stop and start all services, kill all tasks and restart them as well. Sacrifice an intern as well. The Oracle demands blood.

Network Test Phase. Tnsping proves that an application can move electricity from here to there and get electricity back. Given the dim hope of actually getting data back, researchers are working on ways to harness the returning electricity to power office lights and pencil sharpeners.

Test Phase. Sqlplus may or may not end up on your machine. Sqlplus is the preferred way to deal with the Oracle, as the tippity tappity of keystrokes on a command line is pleasing to the ears of the Oracle. If sqlplus is not on your machine after installing drivers, return to the Oracle and keep downloading random crap until sqlplus appears on your machine.

Trace Phase. When troubleshooting a linked Oracle server on SQL 2000, you need to run “DBCC Traceon(7300)” to get better error messages.

Alternate technologies. It is possible that the Oracle driver works, or has better error messages, when you use other technologies. Write a series of applications using ODBC, JDBC, and OLE-DB that define a connection string, attempt to connect, and display either the error message or a success message. If possible, use the “tnsnames.ora”-free connection string. If you get a “tnsnames.ora”-free connection string to work, you can use it to update the tnsnames.ora file. This was the technique I used to finally get a successful connection.
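Those probe apps can share one tiny harness. A sketch in Python for illustration, with the connector passed in so the same loop works against ODBC, OLE-DB, or anything else (`connect_fn` and the candidate strings are placeholders, not a real Oracle API):

```python
# Sketch of the try-every-connection-string technique.  connect_fn is any
# callable that opens a connection object or raises on failure.
def probe(connect_fn, candidates):
    """Try each connection string; return (string, error-or-None) pairs."""
    results = []
    for conn_str in candidates:
        try:
            conn = connect_fn(conn_str)
            conn.close()
            results.append((conn_str, None))      # success
        except Exception as exc:                  # surface the raw error
            results.append((conn_str, str(exc)))
    return results
```

A tnsnames-free string (host, port, and service name spelled out directly) is the one worth trying first, since a working one tells you exactly what to paste back into tnsnames.ora.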

Shock. I can’t believe Oracle drivers are this crappy.

Denial. I must be doing something wrong. This isn’t happening to me.

Bargaining. I give you a chocolate bar if you can fix this tnsnames file for me?

Guilt. I’m a talentless hack. I’ll never connect to the Oracle. This is the end of my career.

Anger. Damn it, I’m going to go over to Larry Ellison’s house and make him configure this damn driver!

Depression. Screw it. The project is going to fail as soon as the dev team finds out the drivers are impossible to install. Sigh.

Acceptance. Oracle sucks. Data in oracle is inaccessible.

Hope. At least there are a lot of other relational databases in the world that have drivers that just work. Even companies as big as Oracle sometimes go out of business, never to be heard from again. Ah, it feels good to dream.

SQL Style vs AzMan Style rights granting

SQL Style
You can call a method (stored procedure) that accesses resources (views and tables) on your behalf, but you can’t always access the resources directly. SQL procedures are almost analogous to AzMan tasks, and tables are analogous to operations. However, an AzMan task implies you have access to all the components of the task directly as well as through the task.

AzMan Style
If you can call any method (task) that uses a resource, you can use that resource directly.

Both styles make sense, so why doesn’t AzMan support them both? Well, in a sense you could simulate SQL style by making all stored procedures ‘operations’. I think the only disadvantage is that if there is an action, say sending an email, which is implicitly available in one operation but not another, it might be confusing. For example, if you have a task called ‘contact customer’, which is made up of ‘lookup address’ and ‘send email’, you might decide to change ‘contact customer’ to an operation, since you don’t want the user to ‘send email’ except in the context of the ‘contact customer’ task.
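The difference is easier to see in code. A toy model in Python (the task and operation names come from the example above; the functions are my invention, not AzMan’s API):

```python
# Toy model of the two rights-granting styles.  'tasks' maps each
# task/procedure to the low-level operations (tables) it touches.
tasks = {"contact_customer": {"lookup_address", "send_email"}}

def can_use_sql_style(user_tasks, user_ops, op):
    # SQL style: being able to call a procedure does NOT grant direct
    # access to the tables it reads; you need an explicit grant on the op.
    return op in user_ops

def can_use_azman_style(user_tasks, user_ops, op):
    # AzMan style: access to a task implies direct access to every
    # operation that the task is made up of.
    return op in user_ops or any(op in tasks[t] for t in user_tasks)

user_tasks, user_ops = {"contact_customer"}, set()
print(can_use_sql_style(user_tasks, user_ops, "send_email"))    # False
print(can_use_azman_style(user_tasks, user_ops, "send_email"))  # True
```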

Review: Google Gears

Ok, I think I grok it now. Google Gears is a huge cookie that can store files and SQL data. It also acts as a web server should the real internet be down. The API is all JavaScript.

The only caveat I see so far is getting people to install the Google Gears plug-in. In MSIE you get warning after warning about the plug-in, and you also seem to need to be an administrator to install it. Without a sophisticated user base or a sympathetic IT department, the ActiveX component installation could be a showstopper.

As for integration with ASP.NET: ASP.NET has never been very friendly to JavaScript developers. ASP.NET changes the element IDs, so you will need to code-generate some JavaScript with <%= ClientID %> golden nuggets each time you reference a control’s ID.

JavaScript counts as an out-of-band call

MS Access as a Linked Server

I needed to simulate an Oracle server, but didn’t like the idea of trying to install and run Oracle Express. So I created a linked server pointing to an Access MDB with tables linked to a SQL Server, which held tables dumped from the Oracle server. This allows OPENQUERY queries without the “db.dbo” prefix you’d need if you referenced the SQL database directly. You know a database is hard to use when you can’t convince database professionals to use the free copy installed on their machines.

These are the things I did to get my linked server working:

  1. Switch connection string from SQL authentication to Windows Authentication.

Without that, you get “cannot find installable ISAM” or authentication errors.

What didn’t work

  1. Giving more rights to the IUSR and ASPNET accounts (this was only failing in my web app, not in SQL Server Management Studio)
  2. Setting the password in the provider string.

MSDTC Considered Harmful

MSDTC creates transactions between two database servers, maybe running on different machines, maybe running on different database engines. That way, when you mark down Al’s account in one server by $10 and mark up Bill’s account in the other server by $10, we don’t end up in weird situations where Bill has $10 and Al still has $10, or Bill has $0 and Al has $0.

But what if you live in the real world and you want to transfer 1.21 jigabytes of data from server A to server B? First of all, this would create explosive log growth and crash your server. And for some reason, SQL Server wants me to involve MSDTC when I do an INSERT INTO ServerA.mydb.dbo.mytable SELECT * FROM ServerB.mydb.dbo.mytable

MSDTC has security problems. It uses RPC and lots of ports, and as such MS turned it off in Windows 2003. Re-enabling it is not as easy as flipping a switch; you have to get into the constipated bowels of the operating system, edit the registry, services, and COM+ settings, and get your hands covered in binary smoo. After all this, you reboot the computer and it still won’t work. This process of trial and error takes more hours than you want to spend on it.

I also tried turning automatic transactions off, no such luck.

Fortunately, a SELECT doesn’t start a transaction, and using the ADO.NET SqlBulkCopy class will not start a transaction either. It is very fast and seems to be smart about data types, so a SELECT * works as a source query.
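One way to keep the log growth under control with that approach is to feed the bulk copy in batches instead of one giant statement. A batching helper sketched in Python for illustration (the `bulk_copy` call in the comment is a placeholder for whatever writer you use, e.g. SqlBulkCopy on the .NET side):

```python
from itertools import islice

def batches(rows, size=10_000):
    """Yield lists of up to `size` rows from any iterable source query."""
    it = iter(rows)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# Hypothetical usage against a source cursor:
# for chunk in batches(source_cursor, 10_000):
#     bulk_copy(chunk)   # stand-in for SqlBulkCopy.WriteToServer
```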

SMO vs TSQL vs Batch vs .NET vs VBS

The prototypical application I have in mind is the backup script, which involves launching a few processes, running a few TSQL commands like “BACKUP”, and doing some file system shuffling: checking for the existence of a file you intend to write, deleting old backup files, moving them to a safe place, etc.  To get anything done, you find yourself committing three crimes against computer science:

Code writing code to make up for missing features in the host language.

Parsing untyped output.  The output of DIR is raw text that you have to parse yourself.

Domain overspecialization.  TSQL does well with tables, but it stumbles when you need to write a batch script in TSQL.  Batch does well at launching processes, but it fails when you need to do loops or handle errors.  .NET has a wonderful library, but you have to roll your own 5-line process launcher.

In batch, you can launch a process in 1 line of code:

osql.exe {args}

In TSQL, you can launch a process, BUT it is just an xp_cmdshell command, so really you are writing batch, and if you have long strings of xp_cmdshell commands you should write batch.  However, TSQL will put the data into a single-column table, so if the output of the batch command is one data element per row, it maps nicely onto a table and you can use it as a table.  Still, you have to use TSQL’s string functions to build the command, there isn’t an independent parameter, and the data returned isn’t typed (it is a table of strings, not a collection of files or some other class).
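For comparison, the “roll your own launcher” really is about five lines in a general-purpose language. A sketch in Python for illustration, which also keeps the output slightly typed (an exit code plus a list of lines rather than one untyped blob):

```python
import subprocess
import sys

def run(cmd):
    """Launch a process; return (exit_code, list of output lines)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout.splitlines()

# Stand-in for something like: osql.exe {args}
code, lines = run([sys.executable, "-c", "print('backup ok')"])
print(code, lines)  # 0 ['backup ok']
```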

SQLCMD.  SQLCmd is a SQL 2005 tool that is a half measure to make some TSQL scripts easier to write and run, especially when they need to be executed in different database contexts.  Previously this had been done by writing dynamic EXEC(“use db EXEC myCommand”) type scripts, which were not fun to write, again due to the lousy string handling of TSQL.

The resulting code is clunky and slow to write, a hybrid of a not very good procedural language with something that was never meant to be used as a programming language at all.

VBS has late binding.  This feature is hardly ever used for anything clever; mostly it exists to cripple the language and make it unfriendly to intellisense-supporting IDEs.  COPY, DIR, and DEL take many lines of code.  SQLDMO exists, but why put yourself through the pain when you could write VB6 in the uncrippled way?  Well, maybe because the environment is COM-unfriendly.

VB6. It takes a couple of lines of code to launch a process, and it is fairly hard to get the output of the command.  SQL-DMO did exist to give object-oriented access, but it requires an environment that is COM-friendly.  Why deal with COM at all when you can use .NET and not worry about whether you have the right to register a COM dll?

.NET and SMO. The prototypical backup script in SMO requires instantiating a bunch of objects in a typed language and relying on the .NET framework for the file system shuffling.  In general, it takes more lines of code than the corresponding batch commands, e.g. COPY, DIR, DEL.  Still, it is better than VB6.

WMI. WMI is just so damn hard to use.  I won’t talk about it; it makes me dizzy.  WMI via .NET is better, but man, you just never know what kind of object you’ll get back when you talk to a WMI object.  Calling everything in the world an object and accessing everything with .getProperty(string name) and .invoke(string name) methods is not really typed programming.

Powershell.  I’m just starting to learn about Powershell.  It looks like the goal is to make batch into a real programming language, with the conciseness of COPY, DIR, and DEL, the library of .NET, and some real programming-language features.

Powershell, you’re my only hope.

When to use an Object Database (Or ASP.NET Personalization)

ASP.NET personalization takes your Profile object, serializes it, and dumps the binary goo into a SQL table. This breaks the normal rules of normalization. You’re supposed to create a normalized table with one column for each property of your profile object, something like

PrimaryKey Int,
PreferredTheme Varchar(50),
PreferredBackgroundColor Varchar(50),

However, the ASP.NET 2.0 framework doesn’t know in advance what thousands of developers are going to think important enough to set up as a property of the profile. Worse, you can have collections as properties, which should be relationally represented as separate tables. To stay relational, the ASP.NET team would have had to issue ALTER TABLE, ADD COLUMN, DROP COLUMN, CREATE TABLE, etc. commands as the web.config file was updated. So we can see that random website personalization bits like PreferredTheme should be serialized and put into the database as a binary, non-relational object. How about putting orders, addresses, payment histories, and the like into the Profile? No! That is a crime against Codd.

So here are some suggested rules about when it is okay to serialize an object and dump it into a table without doing proper relational modeling:

  1. The object is mostly standalone and has few if any foreign key relationships with other tables. E.g. a Profile has a user relationship and that is it.
  2. The object will not be queried in a fashion that returns collections of objects. E.g. we only fetch one Profile at a time, so we aren’t really returning a table’s worth of data.
  3. We don’t have the data in the Profile replicated anywhere else. (That is, if we have a customer table, don’t put customer name into the Profile, or you will have to do two updates to update one entity)
  4. We will never need to aggregate the Profile data. Actually, this could happen with some Profile data. Finding out whether the YellowOnWhite theme is being used more than WhiteOnBlue may be useful in deciding which theme to kill or expand, and extracting that data would require a VB.NET program instead of a simple TSQL statement. On the other hand, last page visited we probably will never aggregate.
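The rules above in miniature, with JSON standing in for the framework’s binary serializer (the property names echo the example table; the code is a sketch, not ASP.NET’s actual mechanism):

```python
import json

# A profile that passes rules 1-3: standalone, fetched one at a time,
# nothing duplicated from other tables.
profile = {"PreferredTheme": "YellowOnWhite",
           "PreferredBackgroundColor": "White",
           "LastPageVisited": "/calendar"}

blob = json.dumps(profile)    # what goes into the blob column
restored = json.loads(blob)   # what the application gets back

print(restored == profile)  # True: it round-trips fine...
# ...but "SELECT PreferredTheme FROM Profiles" is now impossible in TSQL,
# which is exactly why aggregation (rule 4) becomes a client-side job.
```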

Reporting Services and Dynamic Columns

If a field is not in the result set, you can still define it manually, and the report will not raise an error when that field is missing from the executed result set. HOWEVER, there appears to be no way to test for that condition. IsNothing(Fields!MyField.Value) doesn’t appear to return anything, although someone on Experts Exchange thought so. I tried many variations on it. Apparently, if an expression tries to evaluate a missing column, the whole expression evaluates to a blank or zero or sometimes #Error, depending on the particular expression.

Other sites recommend dealing with dynamic columns by referencing a parameter, which knows which columns are appearing:


This is like trying to embed the result set’s metadata into the parameter set. Ugh.