Compiler Error 128

Many things can cause compiler error 128.

Sometimes re-registering ASP.NET with IIS (aspnet_regiis -i) works.

In my case, I had attached a console to a running ASP.NET app. Then I uploaded the correct release build over the top of it (the release build doesn’t attach a console), and then I got compiler error 128. It cleared up on iisreset. If in doubt, pull the power out.

Getting WCF to talk ordinary HTTP to a browser

This is an exercise in driving nails into the coffee table with your shoe. The goal isn’t obviously beneficial and the tool isn’t the expected tool for the job. WCF wants to speak SOAP to SOAP-aware clients. With the expansion to support REST APIs in System.ServiceModel.Web, you can get a WCF service to talk to a browser. However:

* The browser doesn’t serialize complex objects to a C#-like type system on request or response. Instead you deal primarily in a raw Stream.
* Some browsers don’t speak XHTML (they will render it if you call it text/html, but MSIE will render application/xhtml+xml as raw XML), so you can’t just return an X(HT)ML payload.
* WCF used this way is a “bring your own view engine” framework. I chose SharpDom for this exercise. It seems like it should be possible to return a SharpDom value that serializes to XHTML with a type of text/html, but I don’t know how to do that.
* MVC already solves a lot of these problems.

BUT with WCF you get some of those WCF features, like, umm... well, with a browser client a lot of features aren’t available (e.g. fancy transaction support, callbacks, etc.), but you can still do fancy things like instancing, and supporting an HTML browser, JSON, and a SOAP WCF interface all on top of mostly the same code.

Just serving a page is fairly easy. Turn on web support in the config (same as any REST enabling; see the end of this post):

[WebGet]
public Stream HomePage()
{
    // Return a stream with HTML.
    // I have skipped the view engine code; I used SharpDom.
    MemoryStream stream = new MemoryStream();
    TextWriter writer = new StreamWriter(stream, Encoding.UTF8);
    new PageBuilder().Render(model, writer); // model comes from the skipped view-engine code
    writer.Flush();
    stream.Position = 0;
    return stream;
}
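The browser will treat the returned stream as a download or raw XML unless the response says it is HTML. Inside an operation like HomePage you can set the content type through WebOperationContext (a real System.ServiceModel.Web API); the charset here is my assumption and should match the StreamWriter’s encoding:

```csharp
// Tell the browser the stream is HTML; otherwise the webHttpBinding
// default content type applies and the page won't render.
WebOperationContext.Current.OutgoingResponse.ContentType = "text/html; charset=utf-8";
```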


What will the URL look like? Well, in development on Windows 7, if you don’t have admin rights, it will be something like:

http://localhost:8732/Design_Time_Addresses/HelloWorld/web/HomePage

The http://localhost:8732/Design_Time_Addresses/ prefix is the address that a non-admin can register. It looks like you can’t register port 8080.

The /web/ part is there because in my endpoint config (below), the endpoint address is “web”.

Also notice you have to set an encoding (and you’ll presumably want it to match what the HTML meta tag says):

[WebInvoke(Method = "POST")]
public Stream AnotherPostBack(Stream streamOfData)
{
    // The posted form body arrives as a raw stream; parse it yourself.
    StreamReader reader = new StreamReader(streamOfData);
    string res = reader.ReadToEnd();
    NameValueCollection coll = HttpUtility.ParseQueryString(res);
    // ... then build and return a stream of HTML, as in HomePage
    return null; // placeholder
}

To invoke the above, use a method of POST and an action of:

http://localhost:8732/Design_Time_Addresses/HelloWorld/web/AnotherPostBack

And finally, use a web-friendly host in your console app:

using (WebServiceHost host = new WebServiceHost(typeof(HelloService)))
{
    host.Open();
    Console.ReadLine();
}

http://stackoverflow.com/questions/1850293/wcf-rest-where-is-the-request-data

Also, you can post back to this kind of operation… but for the life of me I can’t figure out how to get at the content. I can see the headers, I can see the content length, but I can’t get the stream that holds the POST’s content.

(This StackOverflow Q&A implies that to get the raw content, you have to use reflection to inspect private variables:)

[OperationContract(Action = "POST", ReplyAction = "*")]
[WebInvoke(Method = "POST")]
public Stream PostBack(Message request)
{
    // Headers and content length are visible on request,
    // but the raw body is not exposed through the public API.
    return null; // placeholder
}

Obviously, cookies and URL params are just a matter of inspecting WebOperationContext.Current.IncomingRequest.
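For what it’s worth, a minimal sketch of reading both through the WebOperationContext API (the parameter and cookie names are made up):

```csharp
// Inside a [WebGet]/[WebInvoke] operation:
IncomingWebRequestContext req = WebOperationContext.Current.IncomingRequest;

// URL parameter, e.g. .../web/HomePage?id=42
string id = req.UriTemplateMatch.QueryParameters["id"];

// Raw cookie header; parse it yourself if you need individual cookies
string cookieHeader = req.Headers[HttpRequestHeader.Cookie];
```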

And the config:

<system.serviceModel>
    <services>
      <service name="WcfForHtml.HelloService" behaviorConfiguration="TestServiceBehavior">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8732/Design_Time_Addresses/HelloWorld"/>
          </baseAddresses>
        </host>
        <endpoint address="web"
                  binding="webHttpBinding"
                  contract="WcfForHtml.HelloService"
                  behaviorConfiguration="webBehavior">
        </endpoint>
      </service>
    </services>
      <behaviors>
        <!--SERVICE behavior-->
        <serviceBehaviors>
          <behavior name="TestServiceBehavior">
            <serviceMetadata httpGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="true"/>
          </behavior>
        </serviceBehaviors>
        <!--END POINT behavior-->
        <endpointBehaviors>
          <behavior name="webBehavior">
            <webHttp/>    
          </behavior>
        </endpointBehaviors>
      </behaviors>
  </system.serviceModel>


Production Trace

Assume you work in a large organization. You write code, and you really would like to see some diagnostic trace from your app in Test, Staging, or Production, but a server admin owns all of them. You can’t have the event logs, remote desktop access, or ask that the web.config be edited to add or remove a System.Diagnostics section. Just imagine.

Production trace needs to be:
- high-performing; if it slows down an app that may already be under load, that’s not good
- secure; since trace exposes internals, it should have some authorization restrictions
- possible without changing code or config files, because large organizations often have paralyzing change management processes
- able to support a variety of listeners that meet the requirements above (and if those listeners are write-only, then a reader will need to be written)

System.Diagnostics – file listener
- Perf: not very performant; there will likely be contention for the file.

System.Diagnostics – Console, OutputDebugString, Windows Event Log
- You can’t see them without server access. End of story.

ASP.NET Trace.axd and in-page trace
- Perf: not so good.
- Security: it’s well known, so security teams often disable it.
- Config: can sort of be enabled on a by-page/by-user basis if you use a master page or base page to check rights, and maybe a query string.

Custom Session Listener
Write to session.
- Okay for single-user trace. Needs a page to dump the results.
- Perf: probably okay, though it could hog memory.
- Security: pretty good; by default you can only see your own stuff.
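A minimal sketch of such a listener, assuming it runs inside an ASP.NET request (the "trace" session key and the string-concatenation storage are my own choices, not a known implementation):

```csharp
using System;
using System.Diagnostics;
using System.Web;

// Appends trace output to the current user's Session, so by default
// each user can only ever see their own trace.
public class SessionTraceListener : TraceListener
{
    public override void Write(string message)
    {
        HttpContext ctx = HttpContext.Current;
        if (ctx == null || ctx.Session == null)
            return; // not inside a web request; drop the message

        ctx.Session["trace"] = (ctx.Session["trace"] as string ?? "") + message;
    }

    public override void WriteLine(string message)
    {
        Write(message + Environment.NewLine);
    }
}
```

A dump page then just renders Session["trace"] back to the developer.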

Custom Cache Listener
- Write trace to cache.
- Will have locking problems.
- Won’t hog memory, because of cache eviction.
- Cache eviction could remove trace too fast.

HttpContext.Items listener + base page to dump contents at end of request
- Only shows one page of trace at a time.
- Probably high-perf.
- Won’t show other users.

Cross Database Support

ADO.NET tried really hard to solve the cross-database support problem, and the 2.0-era System.Data.Common namespace does a pretty good job. But when I tried to support MS-SQL and MS-Access, here is what I ran into:

Connection string management is a pain. If you are configuring an app to support MS-SQL and MS-Access (for a library app, in my case a hit counter), you need a pile of connection strings:
1) OleDb Access – because OleDb was the old cross-DB API of choice
2) ODBC Access – because OleDb is now deprecated and ODBC is the new cross-DB API of choice
3) OleDb SQL – same template, different provider
4) Native SQL – some things have to be done natively, such as bulk import

I need something more than a connection string builder, I need a connection string converter. Once I have the SQL native version, I should get the OleDB version and the ODBC version for free.
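A sketch of what I mean, using DbConnectionStringBuilder to re-key a native SqlClient string (the Provider/Driver values are the usual ones for MS-SQL, but real-world strings have key synonyms this toy version doesn’t normalize):

```csharp
using System.Data.Common;

public static class ConnectionStringConverter
{
    // Native SqlClient -> OleDb: same key/value pairs plus a Provider key.
    public static string ToOleDb(string nativeSql)
    {
        var builder = new DbConnectionStringBuilder { ConnectionString = nativeSql };
        builder["Provider"] = "SQLOLEDB";
        return builder.ConnectionString;
    }

    // Native SqlClient -> ODBC: same idea, but ODBC wants a Driver key.
    public static string ToOdbc(string nativeSql)
    {
        var builder = new DbConnectionStringBuilder { ConnectionString = nativeSql };
        builder["Driver"] = "{SQL Server}";
        return builder.ConnectionString;
    }
}
```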

Next: ADO.NET doesn’t make any effort to convert SQL text from one dialect to another, not even for parameter placeholders. So I had to write that code myself.

Cross-DB When You Can, Native When You Have To
Some application features just require really fast inserts. For MS-SQL that means bulk copy. For MS-Access, that means single-statement batches and a carefully chosen connection string. The System.Data.Common namespace lets you use factories that return either OleDb or native objects, but once they are created it is one or the other. What I wish existed was a systematic way for code to check for a feature: if the provider has it, use it; if it doesn’t, fall back. Obviously this sort of feature testing could be a real pain to write for some features. But for things like, say, stored procedures, how hard would it be to check for stored proc support and, where it exists, create a temp or permanent stored proc to execute a command instead of raw SQL? I haven’t really figured out a way to implement this feature.
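The closest I can get is ad hoc feature testing. A sketch of the idea for the bulk-insert case (the fallback SQL is elided because it is per-dialect):

```csharp
using System.Data;
using System.Data.Common;
using System.Data.SqlClient;

public static class FastInsert
{
    public static void Insert(DbConnection connection, DataTable rows)
    {
        // Feature test: bulk copy only exists for the native SQL provider.
        SqlConnection sqlConnection = connection as SqlConnection;
        if (sqlConnection != null)
        {
            using (var bulk = new SqlBulkCopy(sqlConnection))
            {
                bulk.DestinationTableName = rows.TableName;
                bulk.WriteToServer(rows);
            }
            return;
        }

        // Fallback: one INSERT per row, in whatever dialect the provider speaks.
        foreach (DataRow row in rows.Rows)
        {
            using (DbCommand command = connection.CreateCommand())
            {
                command.CommandText = "INSERT INTO ..."; // per-provider SQL goes here
                command.ExecuteNonQuery();
            }
        }
    }
}
```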

Are you Really Cross DB Compatible?
Of course I am. After every compile, I stop and test against all 14 database providers and configurations. Yeah, right. If the application isn’t writing to the DB right now, I’m not testing it. So after I got MS-Access working, I got SQL working; MS-Access support broke. Then I got MS-Access going again. Then they both worked. Then I added a new feature with MS-SQL as the dev target. Then MS-Access broke. And so on.

ADO.NET executes one command against one database. What I need to prove that I have cross-DB support is “multi-cast”: each command needs to be executed against two or more different databases to prove that the code works with all providers. And this suggests an interesting feature, data-tier mirroring, which usually requires a DBA to carefully set up and depends on a provider’s specific characteristics. With multicast, you can do a heterogeneous mirror: write to a really fast but unreliable datastore and also to a really slow but reliable one.

I plan to implement multi-cast next.
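The core of multi-cast is small. A sketch of what it could look like (the method and setup are hypothetical, and it assumes the command text is dialect-neutral):

```csharp
using System.Data.Common;

public static class MultiCast
{
    // Execute the same command text against every open connection,
    // so one code path is proven against all providers at once.
    public static void ExecuteEverywhere(string sql, params DbConnection[] connections)
    {
        foreach (DbConnection connection in connections)
        {
            using (DbCommand command = connection.CreateCommand())
            {
                command.CommandText = sql;
                command.ExecuteNonQuery();
            }
        }
    }
}
```

A heterogeneous mirror is then just calling this with one fast-but-unreliable connection and one slow-but-reliable one.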

Sites I am migrating

I created them in spare time and they add up after a while:

.NET Efforts
http://tokipona.net/ – .NET
http://tokipona.net/tp/janpije/ – A static mirror I’m hosting for jan Pije
http://wordgenerator.wakayos.com – a front end to a word generation tool
http://gloss.wakayos.com – Helps generate linguistic interlinear gloss formatting
http://locavoredc.wakayos.com – A .NET wiki that has some stale info about how to be a locavore in the DC area.

PHP Efforts
http://polyglotdc.suburbandestiny.com/ – A directory of language resources in DC.
http://learnicelandic.net/ – Not sure what to do with this. It is a wiki right now.
http://learnicelandic.net/twitter – a content site that is essentially a blog post about using twitter for foreign language learning
http://learniceland.net/join – a landing page I used for a Google Ads campaign for my Icelandic meetup.

And that is about it.

Customizations I used with Elmah

Elmah isn’t especially secure if you assume the error log itself has already been breached. Even if it hasn’t been breached, sometimes Elmah logs things that the administrator doesn’t want to know, like other people’s passwords.

There are some reliability issues too.

1) Don’t log sensitive data.
- Some of it is in well-known places, e.g. HTTP headers.
- Some is not well known: text boxes where you enter your password.
- ViewState for the above.
2) Don’t refer to DLLs that won’t exist, for fear that dynamic compilation will fail due to a reference that can’t be found; for example, SQLite. I understand why the main project is set up this way, though: the goal was to minimize the number of assemblies distributed while still supporting lots of databases. This could also be a non-issue; assembly resolution, for me, has always been black magic.
3) Override email to use the app’s config instead of Elmah’s config sections in ErrorMailModule. I don’t like doubled config settings, where my app has a setting and so does the component.
4) Use the app’s role system and PrincipalPermission to restrict display to certain roles.
- Add PrincipalPermission attributes to all classes that view things (but not classes that log things); see the end of this post for a list. If you don’t trust your server admins to keep from messing up the web.config, you can put the role checks right into the code. This set worked for me.
5) Strengthen XSS protections.
Change the Mask. and HttpUtility.HtmlEncode calls to AntiXss.HtmlEncode. This creates a dependency on either the AntiXss library or .NET 4.0.
6) Add CDATA sections to JavaScript blocks.
7) Switch to READ UNCOMMITTED in SqlErrorLog.cs. The error log must not itself cause errors (i.e. deadlocking).
8) When the error log gets really large, it has to be rolled over and truncated to prevent locking issues. This at least was a problem in SQL 2000 and, I think, SQL 2005.

A list of classes that could use a security attribute, should you choose such a strategy:

AboutPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorDetailPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorDigestRssHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorHtmlPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorJsonHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogDownloadHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogPageFactory.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorRssHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorXmlHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]

Opinionated Trace

The problem with trace-debug-loggers (I’ll call them all “trace” from now on) is that anything goes. I’m going to try to use the System.Diagnostics namespace with the extensions on CodePlex.

Trace is a story: the moment-to-moment diary of the application. The audience of trace is a developer who wants to know what the application is doing. Computers do a lot, so trace volume has to be carefully managed.

Thou shall not trace in production, unless you have to.
Trace can be expensive. I took a representative 1000-repetition integration test that ran in half a second, turned on verbose logging with DebugView running, and it took 17 seconds. This is why there should be some thought put into logging levels, and why there should be multiple trace sources, so that most of them can be off most of the time.

Thou shall be very wary of logging to the same transactional database as the application.
Jeff had a bad experience with this kind of logging and decided to throw out the baby with the bathwater. I think they just needed to rethink what trace’s promise and possibilities really are.

Thou shall use a TraceSource per class.
Typically we’re debugging a few classes at a time, and turning off the other trace by commenting it out isn’t a practical solution.
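The standard shape of this, with an illustrative class and source name:

```csharp
using System.Diagnostics;

public class OrderProcessor
{
    // One TraceSource per class, named after the class, so its output
    // can be switched on and off independently in config.
    private static readonly TraceSource trace =
        new TraceSource("MyApp.OrderProcessor");

    public void Process()
    {
        trace.TraceEvent(TraceEventType.Information, 0, "Processing started");
        // ... work ...
        trace.TraceEvent(TraceEventType.Information, 0, "Processing finished");
    }
}
```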

Thou shall not use System.Diagnostics.Trace or System.Diagnostics.Debug
Use TraceSource instead. You can’t turn off the static Trace and Debug sources as easily as a TraceSource.

Thou shall not reinvent System.Diagnostics. Extend it. Resist using other people’s re-inventions. Do use other people’s extensions.
Trace is for maintenance developers. A maintenance developer shows up on the scene, and the last thing they want to see is yet another custom solution for a solved problem. How excited would you be to find a code base that shunned System.IO’s file system API and used an entirely custom one? Your app has a bug: you have one problem. You find out all the trace is written using an oddball trace infrastructure: now you have two problems.

Thou shall not do start/end trace with nothing in between
Entry/exit should be recorded for things that have multiple traced steps. If there is nothing in between start and end, it shouldn’t be added to the story *unless* you are doing performance work. If you are recording enter/exit, you should also record the elapsed time. You should use a Dispose pattern to ensure that the End is written.
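A sketch of that Dispose pattern (the class is my own; System.Diagnostics doesn’t ship one):

```csharp
using System;
using System.Diagnostics;

// Writes a Start event on construction and a Stop event, with elapsed
// time, on Dispose; the End line gets written even if an exception
// unwinds the using block.
public class TraceScope : IDisposable
{
    private readonly TraceSource source;
    private readonly string operation;
    private readonly Stopwatch watch = Stopwatch.StartNew();

    public TraceScope(TraceSource source, string operation)
    {
        this.source = source;
        this.operation = operation;
        source.TraceEvent(TraceEventType.Start, 0, operation + " started");
    }

    public void Dispose()
    {
        watch.Stop();
        source.TraceEvent(TraceEventType.Stop, 0,
            operation + " finished in " + watch.ElapsedMilliseconds + "ms");
    }
}

// usage: using (new TraceScope(trace, "LoadCustomers")) { /* traced steps */ }
```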

Thou shall write a unit/integration test that has been tuned for a good trace story.
The trace story should be shorter than a novel, longer than a flippant comment.

Thou shall not write an Error trace unless you know it will also be logged via Elmah or the like
Trace is not error logging. The maintenance developer is obliged to look at the error log; trace is only on occasionally, and even after tuning it could have too much info.

Thou shall educate the maintenance developer on how to use the existing trace
The .NET framework has a couple of trace sources. To get at them, you have to just know that they exist. There isn’t an easy way to query an assembly and ask it what trace sources exist and what switches activate them.

Thou shall look for opportunities to replace comments with trace
We don’t want code to become less readable because of trace, so apply the same reasoning you use when deciding when to comment to deciding when to log (don’t log the obvious, like most variable assignments).

Thou shall not focus on domain-specific events
These would be things like “John editing record B”, or “Sold book to customer Q”.

Thou shall use trace as a sort of poor man’s method-was-called assertion
For example, if you are caching an expensive value, then on the first call there should be a trace message from the CreateExpensiveValue method, and on the second go-round there shouldn’t be any trace message. But unlike a unit test, the assertion is verified by a developer reading the trace. This shouldn’t be a substitute for using mocking frameworks.

Thou shall not bother with Warn. Just use Error and throw an Exception.
Warnings need an audience, and trace doesn’t always have one. Exceptions have an audience. And when an exception is thrown, we may want to add that to the story, since trace and error logs aren’t necessarily kept together.

Thou shall not bother with Verbose. Just use Info.
Let’s say I write a trace message and call it Information. Years later it is in a tight loop that executes ten times a millisecond. You can’t control or predict in advance whether a given message is Info or Verbose.

Thou shall see the link between trace and stepping through code
Ever step through code that kept going through a third class, and you thought, “I wish this would stop stepping through that class”? You could add attributes (and remember to remove them later), or you could switch to a trace strategy that lets you turn off trace for the likely-just-fine class.

Rude and Passive Aggressive SQL Error Messages

First off, some people are just “rude-deaf.” It doesn’t matter what rude language or actions one complains about; they’ll say, “No, that wasn’t rude, the error message only said ‘Fuck you and your grandmother.’”

Second, an error message doesn’t have to say “Fuck you and your grandmother” to be rude. Most error messages’ crimes are being unhelpful and passive-aggressive (i.e. hostility through doing nothing, like watching someone on crutches struggle to open a door).

Incorrect syntax near ‘%.*ls’.
This message typically gets filled in with something like ‘,’ or ‘'’. A typical query is chock-full of commas. Only a passive-aggressive human would tell someone, “There is a spelling mistake in your term paper near one of the spaces.” SQL error messages tend to identify the location of a syntax error by a chunky measure, maybe the statement, so the error could be anywhere inside a 1000-line SQL statement. If the syntax error provided the previous 100 and succeeding 100 non-blank characters, with a pointer at where SQL first realized something went wrong, that would be helpful.

Warning: Fatal error %d occurred at %S_DATE. Note the error and time, and contact your system administrator.
First off, the people who read this either are the administrator, or there isn’t an administrator. You might as well swap in some other imaginary figure, like a god: “Fatal error. Pray to your gods, fucktard.”

The type ‘%.*ls’ already exists, or you do not have permission to create it.
Oh, SQL, you know why this failed. Surely there isn’t a single method called ExistsOrLacksPermissions whose implementers just can’t decide why the exception was thrown. I think this error is rude, or fucked up. You decide.

Finally, be helpful.
Is it really so hard to suggest a fix? “Permission denied; execute a GRANT statement to grant permissions.”

Google exists; whoever is working on MS-SQL ought to google all their own messages and put a sentence’s worth of the internet’s collective advice into each error message.

ClickOnce Experiments

Okay, I used to think ClickOnce was a sandbox, kind of like Java applets. I used to think that every ClickOnce application installed from an internet link would be put in a sandbox with partial trust, so that certain .NET APIs and unmanaged code couldn’t be executed.

I was wrong! At least, according to my experiments today with .NET 4.0 and ClickOnce.

I took the Cassini source code and modified it so that it would launch and then set up a virtual directory for a website that I bundled with it, essentially as “resource”/“content” files. I figured out how to get that to work in .NET and in ClickOnce. I thought, gee, surely the APIs necessary to load an AppDomain, host an ASP.NET site, and serve files on port 80 would be forbidden by the sandbox, right? Initially I thought it only worked because I was installing it locally. So I put the files up on a website, downloaded and installed from there, and it still let Cassini run from the ClickOnce local storage area and serve up a website in full trust.

Well, it turns out the sandbox is opt-in. If a software publisher doesn’t opt in, the user just gets a warning that doesn’t really make any sense, and the application runs in full trust.

I did check how Cassini runs in ClickOnce after opting in to Internet-level trust. Now the ClickOnce version of Cassini fails as soon as it tries to find the path to its own assembly files. I still got warnings that were not so much scary as unintelligible, about needing to “trust” the remote website.

Well, so much for sandboxing. One thing worth noting: I only get the browser’s warning, “Hey, this came from the internet, sure you want to run it?” I don’t get the UAC curtain of “this application will change your machine.” I do get the unintelligible ClickOnce message: “Unknown publisher; this app has access to your machine and start menu, and, well, it came from the internet.” I imagine grandma reading that and thinking, “Well, I don’t personally know them either, and I’ve already been told this is from the internet.” Where else does software come from? The machine I’m writing from doesn’t even have a CD drive.

So a malicious code writer would distribute code and not opt in to sandboxing, in full expectation that some people will click through the messages.

A non-malicious code writer would only benefit from sandboxing if he opted in, didn’t need the restricted APIs, and a malicious code writer tried to sneak an assembly into the non-malicious application and execute it, maybe via a plug-in feature. But why bother with malicious plugins when you can just get people to run your separate full-trust app? Besides, to run a plug-in in .NET you need to be able to load assemblies on demand, and I bet a medium- or low-trust application wouldn’t be able to do that.

Custom Exception Antipatterns

try { }
catch (SomeException)
{
    throw new MyCustomException("Error in middle tier, method foobar()");
}

The above is wrong for multiple reasons. The text only tells you some stack trace information. Unless you know for sure that your error logging or error reporting infrastructure discards the stack trace and you can’t fix it, don’t put stack trace information in your error message. The default yellow screen and Elmah both capture the stack trace.

The next reason the above is wrong: it overwrote the error. The error used to be something specific, e.g. a SecurityException because you had the wrong NTFS permissions, or a FormatException because you had two decimal points. But now the error has been overwritten with “something bad happened.”

try { }
catch (SomeException e)
{
    throw new MyCustomException("You need to run batch file foo.bat", e);
}

The above pattern wraps the exception. But error loggers don’t always display inner exceptions, especially when they are nested several deep. Don’t wrap errors unless the wrapper provides something remarkably valuable, or unless you are *actually* writing code to trap this sort of error on the next tier (planning, or thinking you might in five years, doesn’t count).

In ASP.NET, don’t catch unless you feel pain from not catching.

In WinForms you have to catch, or else the application exits. But in ASP.NET, only the page request ends. From the user’s standpoint the application is still running, because the next request can still succeed.