So you want to run PowerShell scripts without admin rights

First, log on as a limited, non-admin user on Windows 7. This should be easy, because large organizations are yanking admin rights, apps are running better without them, and whining to the help desk isn’t as effective as it used to be.

Create an empty file, say test.ps1

Try to run it using

.\test.ps1

You can’t. Execution of scripts has been disabled. (You try modules and profile scripts; same issue.) So you read up and try running

set-executionpolicy remotesigned

And you get an error message about not being able to modify the registry because you are not an admin on your machine; you are a limited user. And then you think to yourself…There’s a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can’t take part! You can’t even passively take part! And you’ve got to put your bodies upon the gears and upon the wheels…upon the levers, upon all the apparatus, and you’ve got to make it stop! And you’ve got to indicate to the people who run it, to the people who own it, that unless you’re free, the machine will be prevented from working at all!

And with a rebel cry we jump to Google. A non-admin can still run a batch file as a limited user. So can we execute the equivalent of a .ps1 as a .bat?

Yes! Our friends from the CCCP know how.

Wrap everything in your .ps1 file in a

$code={ #code goes here }

Encode it (Base64 over the script’s UTF-16 bytes):

$encoded = [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($code))
Put it in a batch file like so, named test.bat

powershell.exe -NoExit -EncodedCommand <paste the Base64 string here>

The code executes and is the moral equivalent of executing a .ps1 file. Except you have no clue what the source is by casual inspection. And it means all non-admin users have to run their .ps1 code through a build step.
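For the curious, that encode step is nothing exotic: the -EncodedCommand payload is just Base64 over the UTF-16LE bytes of the script text. A minimal C# sketch of the same build step (the script text here is my own example):

```csharp
using System;
using System.Text;

class EncodeForPowerShell
{
    static void Main()
    {
        // The script text you would otherwise put in test.ps1.
        string script = "Write-Host 'hello from a limited user'";

        // PowerShell's -EncodedCommand expects Base64 of the UTF-16LE (Unicode) bytes.
        string encoded = Convert.ToBase64String(Encoding.Unicode.GetBytes(script));

        Console.WriteLine("powershell.exe -NoExit -EncodedCommand " + encoded);
    }
}
```

Decoding the Base64 back through UTF-16 gives you the original script, which is why a build step like this is lossless, just opaque.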

Jeffrey Snover, tear down this wall! Thanks.

Reading System.Diagnostics from the Mono sources

I’ve been trying to use System.Diagnostics for a while. I think I see why it failed to catch on. It assumes that developers will write a lot of code for a tertiary customer– the production server admin staff. Why do I think this? A good third of the code and complexity of the namespace is related to XML configuration. A production server admin can’t recompile the code, but sometimes, in some organizations, they can change configuration, say a .config, .ini or registry setting. And through these means they could turn trace on and off. But that only pays off if the original developers wrote a lot of trace using a library that can be turned on and off. System.Net and System.ServiceModel both use System.Diagnostics trace. Most other framework namespaces do not– you can use Reflector or the like to search the .NET API for instances of TraceSource, and you find out that there are not a lot. People were using Console.WriteLine, Response.Write and every other technique they learned in their first Hello World program.

Making System.Diagnostics Palatable to Developers
* Trace needs to be production safe, not just for threading but for performance. Write should take a lambda function instead of a string. The listeners shouldn’t have a slow default that writes to a hard-to-see listener (the OutputDebugString API).
* Trace should work well in environments where a real database and possibly the filesystem aren’t available. ASP.NET makes it too hard to write to the console because you have to attach the console to the WebDev server using Win32 API calls; there isn’t a built-in Application[], Cache or Session listener, nor is there an OleDb, MS-Access, or Excel listener.
* Trace should allow all components to be customized: Listeners, Sources, Switches, Filters, and Output Formatting. The last, formatting, is barely developed in the System.Diagnostics API. Switches and Sources have to be completely wrapped to effectively change their behavior. And you can only have 1 Filter per listener and 1 Switch per source– a big restriction. Another annoyance is that if you do want to extend the API, then currently you have to stick within the constraints of legacy config– so you can’t implement multiple switches per source without abandoning the legacy XML config and writing a whole new XML config section handler.
* Trace should have a fluent API. I want to be able to write in a fluent API the configuration scenarios and then use an admin page to turn these scenarios on and off. Some typical scenarios — show me the app trace, show me the sql trace, show me the data trace, show perf trace, show everything, show only the current user, show all users, show me the next 10 minutes, write it to Session and then email it to me. When I have those, I have an incentive to write trace, and then when the code goes to production, the production admin will have a chance of diagnosing what is going on.
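The lambda idea from the first bullet is easy to sketch as an extension method. Everything here is my own invention (TraceLazy is not part of System.Diagnostics); the point is that the message factory never runs when the switch says nobody is listening:

```csharp
using System;
using System.Diagnostics;

public static class TraceSourceExtensions
{
    // Hypothetical lambda-friendly Write: the expensive string is only
    // built if the source's switch would actually pass the event along.
    public static void TraceLazy(this TraceSource source,
                                 TraceEventType eventType, int id,
                                 Func<string> messageFactory)
    {
        if (source.Switch.ShouldTrace(eventType))
        {
            source.TraceEvent(eventType, id, messageFactory());
        }
    }
}
```

Usage would look like `source.TraceLazy(TraceEventType.Verbose, 0, () => ExpensiveDump(order));` — when Verbose is off, ExpensiveDump is never called, which is most of what “production safe for performance” means.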

Compiler Error 128

Many things cause compiler error 128. Ref, here.

Sometimes re-registering ASP.NET with IIS works.

In my case, I had attached a console to a running app. Then I uploaded the correct release build over the top of that (the release build doesn’t attach a console), and then I got compiler error 128. It cleared up on iisreset. If in doubt, pull the power out.

Getting WCF to talk ordinary HTTP to a browser

This is an exercise in driving nails into the coffee table with your shoe. The goal isn’t really all that obviously beneficial, and the tool isn’t the expected tool for the job. WCF wants to speak SOAP to SOAP-aware clients. With the expansion to support a REST API in System.ServiceModel.Web, you can get a WCF service to talk to a browser. HOWEVER

* The browser doesn’t serialize complex objects to a C#-like data type system on Request or Response. Instead you deal primarily in a raw Stream.
* Some browsers don’t speak XHTML (they will render it if you call it text/html, but MSIE will render application/xhtml+xml as raw XML), so you can’t just return an X(HT)ML payload.
* WCF used this way is a “bring your own view engine” framework. I chose SharpDom for this exercise. It seems like it should be possible to support returning a SharpDom return value that serializes to XHTML with a type of text/html, but I don’t know how to do that.
* MVC already solves a lot of similar problems.

BUT with WCF you get some of those WCF features, like umm, well– when you have a browser client a lot of features aren’t available (e.g. fancy transaction support, callbacks, etc.), but you can still do fancy things like instancing, and supporting an HTML browser, JSON and WCF interface all on top of mostly the same code.

Just serving a page is fairly easy. Turn on web support in the config (same as any REST enabling, see the end of this post):

public Stream HomePage()
{
            //Return a stream with HTML
            //... I have skipped the view engine, I used SharpDom
            //(model here is whatever object your view engine renders)
            MemoryStream stream = new MemoryStream();
            TextWriter writer = new StreamWriter(stream, Encoding.UTF8);
            new PageBuilder().Render(model, writer);
            writer.Flush(); //without a flush, the stream can come back empty
            stream.Position = 0;
            return stream;
}
What will the URL look like? Well, in development on Win 7, if you don’t have admin rights, it will be something like:

http://localhost:8732/Design_Time_Addresses/HelloWorld/web/HomePage

The http://localhost:8732/Design_Time_Addresses/ part is the address that a non-admin can register. It looks like you can’t register 8080.

The /web/ part is because in my endpoints in config (below), the endpoint address is “web”.

Also notice you have to set an encoding (and I suppose you’ll want that to match what the HTML meta tag says)

[WebInvoke(Method = "POST")]
public Stream AnotherPostBack(Stream streamOfData)
{
    StreamReader reader = new StreamReader(streamOfData);
    String res = reader.ReadToEnd();
    NameValueCollection coll = HttpUtility.ParseQueryString(res);
    //... use coll, then return a stream of HTML as in HomePage
}

To invoke the above, use a METHOD of POST and an action of

http://localhost:8732/Design_Time_Addresses/HelloWorld/web/AnotherPostBack
And finally, use a web friendly host in your console app

using (WebServiceHost host = new WebServiceHost(typeof(HelloService)))

Also, you can post back to this kind of operation… but for the life of me I can’t figure out how to get the Content. I can see the headers, I can see the content length, but I can’t get at the stream that holds the post’s content.

(This StackOverflow Q & A implies that to get the raw content, you have to use reflection to inspect private variables: )

[OperationContract(Action = "POST", ReplyAction = "*")]
[WebInvoke(Method = "POST")]
public Stream PostBack(Message request)

Obviously, cookies and URL params are just a matter of inspecting the IncomingRequest.

And the config:

      <service name="WcfForHtml.HelloService" behaviorConfiguration="TestServiceBehavior">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8732/Design_Time_Addresses/HelloWorld"/>
          </baseAddresses>
        </host>
        <endpoint address="web"
                  binding="webHttpBinding"
                  behaviorConfiguration="webBehavior"
                  contract="WcfForHtml.HelloService"/>
      </service>
      <behaviors>
        <serviceBehaviors>
          <!--SERVICE behavior-->
          <behavior name="TestServiceBehavior">
            <serviceMetadata httpGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="true"/>
          </behavior>
        </serviceBehaviors>
        <endpointBehaviors>
          <!--END POINT behavior-->
          <behavior name="webBehavior">
            <webHttp/>
          </behavior>
        </endpointBehaviors>
      </behaviors>
Posted in wcf

Production Trace

Assume you work in a large organization: you write code, and you really would like to see some diagnostic trace from your app in Test, Staging or Production, but a server admin owns all of them. You can’t have the event logs, you can’t have remote desktop access, and you can’t ask that the web.config be edited to add or remove a System.Diagnostics section. Just imagine.

Production trace needs to be:
- high performing; if it slows down an app that may already be under load, not good
- secure, since trace exposes internals, it should have some authorization restrictions
- not require change of code or config files, because large organizations often have paralyzing change management processes
- support a variety of listeners that will meet the requirements above (and if those listeners are write only, then a reader will need to be written)

System.Diagnostics -File-
- Perf- Not very performant, will likely have contention for the file.

System.Diagnostics-Console, DebugString, Windows Event Log
- You can’t see it. End of story.

ASP.NET Trace.axd and In Page
- Perf- not so good.
- Security- it’s well known, so security teams often disable it
- Config- Can sort of enable on a by page/by user basis if you use a master page or base page to check rights and maybe a query string.

Custom Session Listener
Write to session.
- Okay for single user trace. Need a page to dump the results.
- Perf probably okay, could hog memory.
- Security, pretty good, by default you can only see your own stuff.
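A custom session listener could be sketched like this. The class name and the injectable buffer are my own invention (in a real ASP.NET app you would hand it a delegate that pulls a per-user list out of HttpContext.Current.Session; the indirection keeps the sketch testable outside a web context):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Hypothetical per-user listener: trace lands in a per-user buffer
// (e.g. Session), so by default a user only sees their own story.
public class SessionTraceListener : TraceListener
{
    private readonly Func<IList<string>> _getBuffer;

    // In ASP.NET, pass something like:
    //   () => (IList<string>)HttpContext.Current.Session["__trace"]
    public SessionTraceListener(Func<IList<string>> getBuffer)
    {
        _getBuffer = getBuffer;
    }

    public override void Write(string message)
    {
        var buffer = _getBuffer();
        if (buffer != null) buffer.Add(message);
    }

    public override void WriteLine(string message)
    {
        Write(message);
    }
}
```

You would still need a dump page to read the buffer back, per the note above.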

Custom Cache Listener
- Write trace to cache
- Will have locking problems
- Won’t hog memory because of cache eviction
- Cache eviction could remove trace too fast.

HttpContext.Items listener +Base page to dump contents at end of request
- Only shows one page of trace at a time
- Probably high perf.
- Won’t show other users

Cross Database Support

ADO.NET tried really hard to solve the cross-database support problem. And the 2.0 version (or so), with the System.Data.Common namespace, does a pretty good job. But when I tried to support SQL and MS-Access, here is what I ran into:

Connection string management is a pain. If you are configuring an app to support MS-SQL and MS-Access (for a library app, in my case a hit counter), you need up to 6 connection strings, for example:
1) OleDb Access – Because this is the old cross-DB API of choice
2) ODBC Access – Because OleDb is deprecated and ODBC is the new cross-DB API of choice
3) SQL OleDb – Same template, different provider
4) Native SQL – Some things have to be done natively, such as bulk import.

I need something more than a connection string builder, I need a connection string converter. Once I have the SQL native version, I should get the OleDB version and the ODBC version for free.

Next– ADO.NET doesn’t make any effort to convert SQL text from one dialect to another, not even for parameters. So I had to write that code myself.
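The parameter half of that conversion is the mechanical part. Here is a naive sketch (my own names, and deliberately simple: it doesn’t skip string literals or comments) that rewrites SQL Server style named markers (@name) into the positional ? markers that ODBC/OleDb expect, reporting the order the values must be bound in:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class ParameterMarkers
{
    // Converts "@name" markers to "?" and returns the binding order,
    // since positional providers bind parameters by position, not name.
    public static string ToPositional(string sql, out List<string> order)
    {
        var names = new List<string>();
        string result = Regex.Replace(sql, @"@(\w+)", m =>
        {
            names.Add(m.Groups[1].Value);
            return "?";
        });
        order = names;
        return result;
    }
}
```

Going the other direction (? back to named) needs an external list of names, which is part of why a connection-string-plus-SQL converter is more than a string builder.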

Cross DB When You Can, Native When You Have To
Some application features just require really fast inserts. For MS-SQL that means bulk copy. For MS-Access, that means single-statement batches and a carefully chosen connection string. The System.Data.Common namespace lets you use factories that return either OleDb or native, but once they are created it is one or the other. What I wish there was, was a systematic way for the code to check for a feature and, if it’s there, use it; if it isn’t, fall back. Obviously this sort of feature testing could be a real pain to write for some features, but for things like, say, stored procedures, why would it be hard to check for stored proc support and, when it exists, create a temp or perm stored proc to execute a command instead of just raw SQL? I haven’t really figured out a way to implement this feature.

Are you Really Cross DB Compatible?
Of course I am. After every compile, I stop and test against all 14 database providers & configurations. Yeah right. If the application isn’t writing to the DB right now, I’m not testing it. So after I got MS-Access working, I got SQL working. MS-Access support broke. Then I got MS-Access going again. Then they both worked. Then I added a new feature with MS-SQL as the dev target. Then MS-Access broke. And so on.

ADO.NET executes one command against one database. What I need to prove that I have cross-DB support is “multi-cast”. Each command needs to be executed against two or more different databases to prove that the code works with all providers. And this creates a possibly interesting feature of data-tier mirroring, a feature that usually requires a DBA to carefully set it up and depends on a provider’s specific characteristics. With multicast, you can do a heterogeneous mirror– write to a really fast but unreliable datastore and also write to a really slow but reliable datastore.
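The multi-cast idea could be sketched as below. The class is my own invention, built only on System.Data.Common abstractions; it assumes the SQL text is already dialect-neutral (or already converted), which is the hard part discussed above:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.Common;

// Hypothetical multi-cast executor: runs the same command text against
// every configured provider, so cross-DB support is exercised on every
// write instead of only when you remember to switch dev targets.
class MultiCastExecutor
{
    private readonly IList<DbConnection> _connections;

    public MultiCastExecutor(IList<DbConnection> connections)
    {
        _connections = connections;
    }

    public int ExecuteNonQueryAll(string sql)
    {
        int lastCount = 0;
        foreach (DbConnection connection in _connections)
        {
            if (connection.State != ConnectionState.Open) connection.Open();
            using (DbCommand command = connection.CreateCommand())
            {
                command.CommandText = sql; // assumes dialect-neutral SQL
                lastCount = command.ExecuteNonQuery();
            }
        }
        return lastCount;
    }
}
```

A real version would need a policy for partial failure (fast-but-unreliable store down, slow-but-reliable store up), which is exactly the heterogeneous-mirror trade-off.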

I plan to implement multi-cast next.

Sites I am migrating

I created them in spare time and they add up after a while:

.NET Efforts – .NET – A static mirror I’m hosting for jan Pije – a front end to a word generation tool – Helps generate linguistic interlinear gloss formatting – A .NET wiki that has some stale info about how to be a locavore in the DC area.

PHP Efforts – A directory of language resources in DC. – Not sure what to do with this. It is a wiki right now. – a content site that is essentially a blog post about using twitter for foreign language learning – a landing page I used for a google ads campaign for my icelandic meetup.

And that is about it.

Customizations I used with Elmah

Elmah isn’t especially secure if you assume the error log itself has already been breached. Even if it hasn’t been breached, sometimes Elmah logs things that the administrator doesn’t want to know, like other people’s passwords.

There are some reliability issues too.

1) Don’t log sensitive data.
- Some data is well known, e.g. HTTP headers
- Some data is not well known, e.g. textboxes where you enter your password
- Viewstate for the above
2) Don’t refer to DLLs that won’t exist, for fear that dynamic compilation will fail due to a reference that can’t be found– for example, the SQLite one. I understand why the main project is set up this way though– the goal was to minimize the number of assemblies distributed and still support lots of databases. This could also be a non-issue. Assembly resolution, for me, has always been black magic.
3) Override Email to use the app’s config, instead of Elmah’s config sections in the ErrorMailModule. I don’t like doubled config settings, where my app has a setting and so does the component.
4) Use the app’s role system and PrincipalPermission to restrict display to certain roles
- Add PrincipalPermission attributes to all classes that view things (but not classes that log things); see the end for a list. If you don’t trust your server admins to keep from messing up the web.config, you can put the role checks right into the code. This set worked for me.
5) Strengthen XSS protections.
Change Mask. and HttpUtility.HtmlEncode to AntiXss.HtmlEncode. This creates a dependency on either the AntiXss library or .NET 4.0.
6) Add CDATA to javascript blocks
7) Switch to READ UNCOMMITTED. The error log must not cause errors (i.e. deadlocking)
8) When the error log gets really large, it has to be rolled over and truncated to prevent locking issues. This at least was a problem in SQL 2000 and I think SQL 2005.

List of classes that could use a security attribute, should you choose such a strategy.

AboutPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorDetailPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorDigestRssHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorHtmlPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorJsonHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogDownloadHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogPageFactory.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorRssHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorXmlHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]

Opinionated Trace

The problem with trace-debug-loggers (I’ll call it trace from now on) is that anything goes. I’m going to try to use the System.Diagnostics namespace with the extensions on CodePlex.

Trace is a story, it is the moment to moment diary of the application. The audience of trace is a developer. They want to know what the application is doing. Computers do a lot, so trace volume will have to be carefully managed.

Thou shall not trace in production, unless you have to.
Trace can be expensive. I took a representative 1000 repetition integration test that ran in 1/2 second and turned on verbose logging with DebugView running, and it took 17 seconds. This is why there should be some thought put into logging levels and why there should be multiple trace sources, so that most of them can be off most of the time.

Thou shall be very wary of logging to the same transactional database as the application.
Jeff had a bad experience with this kind of logging and decided to throw the baby out with the bath water. I think they just needed to rethink what trace’s promise and possibilities really are.

Thou shall use a TraceSource per class.
There should be a trace source per class. Typically we’re debugging a few classes at a time and turning off the other trace by commenting it out isn’t a practical solution.
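A per-class source is just a static field named after the class; config can then dial each class up or down independently. The class name here is a made-up example:

```csharp
using System.Diagnostics;

// One TraceSource per class: each class gets its own named switch
// in config, so you can turn up just the classes you're debugging.
public class OrderProcessor // hypothetical class
{
    private static readonly TraceSource Trace =
        new TraceSource(typeof(OrderProcessor).FullName);

    public void Process(int orderId)
    {
        Trace.TraceEvent(TraceEventType.Information, 0,
            "Processing order {0}", orderId);
        // ... actual work ...
    }
}
```

In config, a matching `<source name="OrderProcessor" switchValue="Information">` entry turns that one class’s story on while everything else stays Off.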

Thou shall not use System.Diagnostics.Trace or System.Diagnostics.Debug
Use TraceSource instead. You can’t turn off these sources as easily as a TraceSource

Thou shall not reinvent System.Diagnostics. Extend it. Resist using other people’s re-inventions. Do use other people’s extensions
Trace is for maintenance developers. A maintenance developer shows up on the scene and the last thing they want to see is yet another custom solution for a solved problem. How excited would you be to find a code base that shunned System.IO’s file system API and used an entirely custom one? Your app has a bug: you have one problem. You find out all the trace is written using an oddball trace infrastructure: now you have two problems.

Thou shall not do start/end trace with nothing in between
Entry/exit should be recorded for things that have multiple traced steps. If there is nothing in between start/end, it shouldn’t be added to the story *unless* you are doing performance. If you are recording enter/exit, you should also record the amount of time. You should use a Dispose pattern to ensure that the End is written.
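A Dispose-pattern scope for this might look like the following sketch (TraceScope is my own name, not a framework type); `using` guarantees the Stop event, with elapsed time, even when the body throws:

```csharp
using System;
using System.Diagnostics;

// Hypothetical scope: Start is written on creation, Stop (with elapsed
// time) is guaranteed by using/Dispose, even on exceptions.
public sealed class TraceScope : IDisposable
{
    private readonly TraceSource _source;
    private readonly string _activity;
    private readonly Stopwatch _watch = Stopwatch.StartNew();

    public TraceScope(TraceSource source, string activity)
    {
        _source = source;
        _activity = activity;
        _source.TraceEvent(TraceEventType.Start, 0, activity);
    }

    public void Dispose()
    {
        _watch.Stop();
        _source.TraceEvent(TraceEventType.Stop, 0,
            "{0} took {1} ms", _activity, _watch.ElapsedMilliseconds);
    }
}
```

Usage: `using (new TraceScope(source, "LoadCustomers")) { /* traced steps */ }`.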

Thou shall write a unit/integration test that has been tuned for a good trace story.
The trace story should be shorter than a novel, longer than a flippant comment.

Thou shall not write an Error trace unless we know it will also be error logged via Elmah or the like
Trace is not error logging. The maintenance developer is obliged to look at the error log; Trace is only on occasionally, and even after tuning could have too much info.

Thou shall educate the maintenance developer on how to use the existing trace
The .NET framework has a couple of trace sources. To get at them, you have to just know that they exist. There isn’t an easy way to query an assembly and ask it what trace sources are there and what switches it takes to activate them.

Thou shall look for opportunities to replace comments with trace
We don’t want code to become less readable because of trace. So apply the same reasoning about when to comment to when to log (don’t log the obvious, like most variable assignments).

Thou shall not focus on domain specific events
These would be things like “John editing record B”, or “Sold book to customer Q”.

Thou shall use trace as sort of a poor mans Method-was-called Assertion
For example, if you are caching an expensive value, then on the first call there should be a trace message from the CreateExpensiveValue method, and on the second go-round there shouldn’t be any trace message. But unlike a unit test, the assertion is verified by a developer reading code. This shouldn’t be a substitute for using mocking frameworks.

Thou shall not bother with Warn. Just use Error and throw an Exception.
Warnings need to have an audience and trace doesn’t always have an audience. Exceptions have an audience. And when an exception is thrown, we may want to add that to the story, since trace and error logs aren’t necessarily going to be together.

Thou shall not bother with Verbose. Just use Info
Let’s say I write a trace and I call it information. Years later it is in a tight loop that is executed 10 times a millisecond. You can’t control or predict in advance if a given message is Info or Verbose.

Thou shall see the link between Trace and step through
Ever step through code that kept going through a third class and you thought, I wish this would stop stepping through that class? You could add attributes (and remember to remove them later) or you could switch to a trace strategy that allows you to turn off trace for the likely-just-fine class.

Rude and Passive Aggressive SQL Error Messages

First off, some people are just “rude-deaf.” It doesn’t matter what rude language or actions one complains about; they say, “No, that wasn’t rude, the error message only said ‘Fuck you and your grandmother.’”

Second, you don’t have to have an error message that says “Fuck you and your grandmother” to be rude. Most error messages’ crimes are being unhelpful and passive aggressive (i.e. hostility through doing nothing, like just watching someone on crutches struggling to open a door).

Incorrect syntax near ‘%.*ls’.
This message typically is filled in with something like ‘,’ or ‘'’. A typical query is chock-full of commas. Only a passive-aggressive human would tell someone, “There is a spelling mistake in your term paper near one of the spaces.” SQL error messages tend to identify the location of a syntax error by a chunky measure, maybe the statement, so the error could be anywhere inside a 1000-line SQL statement. If the syntax error could provide the previous 100 and succeeding 100 non-blank characters with a pointer at where SQL first realized something went wrong, that would be helpful.

Warning: Fatal error %d occurred at %S_DATE. Note the error and time, and contact your system administrator.
First off, the people who read this either are the administrator, or there isn’t an administrator. You might as well swap in some other imaginary figure, like a god. “Fatal error. Pray to your gods, fucktard.”

The type ‘%.*ls’ already exists, or you do not have permission to create it.
Oh SQL, you know why this failed. Surely there isn’t a single method called ExistsOrLacksPermissions and the implementers just can’t decide why that exception was thrown. I think this error is rude or fucked up. You decide.

Finally, be helpful.
Is it really so hard to write a suggested fix? “Permission denied; execute a GRANT statement to grant permissions.”

Google exists; whoever is working on MS-SQL ought to google all their own messages and just put a sentence’s worth of the internet’s collective advice into their error message.