

Launching WinForms apps from a console

This is harder than it seemed like it should be.

I wanted a console app with trace. I like tracing to a console, but if I'm writing a console app, the trace is going to interleave with the UI. So I thought: I'll create a WinForms class, Application.Run() it, and then send text to a TextBox. And I'll get pretty trace in a window other than the main console.

So I did that, and the application blocked on Application.Run(), meaning the form responded to clicks and key presses, but the console froze and executed no code.

So I learned to do form.Show() plus Application.DoEvents(), which shows the form, lets the console code run, and then lets the form UI update.

But things still seemed blocked… and they were, this time because Console.ReadKey() was blocking. So I changed Console.ReadKey() to a while loop with Thread.Sleep(250), and then I could move both windows and send output to both of them via my TraceListener and ordinary console commands.
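
Here is a minimal sketch of that pattern, assuming a console project that references System.Windows.Forms; TraceForm and its AppendText helper are hypothetical names, not real framework types.

using System;
using System.Threading;
using System.Windows.Forms;

class Program
{
    [STAThread]
    static void Main()
    {
        var form = new TraceForm();  // hypothetical form that appends text to a TextBox
        form.Show();                 // Show(), not Application.Run(), so the console keeps executing

        bool done = false;
        form.FormClosed += (s, e) => done = true;

        while (!done)
        {
            form.AppendText("tick " + DateTime.Now + Environment.NewLine);
            Application.DoEvents();  // let the form repaint and respond to input
            Thread.Sleep(250);       // instead of Console.ReadKey(), which would block everything
        }
    }
}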

The console is single threaded, so it is easy to get blocked, which blocks any Forms it might have spawned. Also, Windows Forms really doesn't like to be manipulated from a thread other than the one that created the Form, so some kinds of cross-window communication blew up.

So that was my experiment. I guess the alternative would be to write to a WPF window, which could potentially make very pretty trace, but I don’t know if it would be worth my time to learn WPF.

Posted in Trace.


Javascript Books I’ve read

I am an ASP.NET/C# developer and all of a sudden I needed to write a mostly client-side line-of-business web app. So I started reading. This has taken up much of my train time for the last half year.

Books I recommend.
Professional JavaScript for Web Developers, Zakas. Required reading, beginner to intermediate. Some basics are covered, but the book is also encyclopedic, so some content is stuff that an intermediate or advanced dev would care about, such as the less common APIs.

Javascript Patterns. Beginner to intermediate. Great book. I've found that a lot of concepts don't sink in from just one reading by just one author. This book is a good complement to Zakas.

Eloquent Javascript. Fantastic book for the intermediate dev. Some of the later chapters dragged because I couldn't care about the game, but sometimes a good sample app is a good thing. Five of five stars. (Also, this book had the best intro to functional programming without feeling like a math textbook.)

Effective Javascript. Intermediate to advanced. Covers all the less common challenges you run into writing JavaScript. Required reading.

High Performance JavaScript AND Even Faster Websites. Intermediate. Team books– I now confuse them in my memory. Chapters of varying quality, and some of the advice is not very general (well, what if you aren't serving many pictures?).

Functional Javascript. Mostly advanced, some intermediate. I liked 1/3 of this– that is, about 1/3 of each chapter. The other 2/3 were written for someone much smarter than me, possibly someone so smart that they wouldn't need to read a book about functional programming, they'd just do it spontaneously. The sample code was overly compressed and often read like algebra proofs with too many "easy" and "obvious" steps skipped over.

Effective REST Services with .NET. Has some Javascript, but the focus is on RESTy things.

The Art of Readable Code. Not JS specific, but it was a good book.

Books I don’t recommend
Javascript & jQuery: The Missing Manual. Beginner, as in you-don't-know-how-to-program-in-any-language-yet beginner. I bought this by mistake. But I would use this to teach my kids to program.

JavaScript Web Applications, MacCaw. Mostly advanced, some intermediate. This is actually about Spine; I wish the title said so. It was written for someone smart enough to write their own MVC framework.

(Where are The Definitive Guide and The Good Parts?)
I use JSLint, which, imho, is a substitute for actually reading The Good Parts. The Definitive Guide, last time I read it, was like reading machine-generated javadoc. I don't know if that is still fair, but it kept me from buying an updated version.

Pluralsight Videos I’ve watched
I never know what sort of input will make my brain understand something in IT. Will it be hands-on development? A book? A screencast? An audio podcast? So I try them all. If I sound lukewarm about some of these (the Knockout and Underscore videos), it's because these libraries aren't going to sink in for me until I work with them. But if I hadn't watched the videos, I probably would have had a painful time getting started in the first place. And I find watching training videos to be work. This isn't Game of Thrones.

JsRender by John Papa. Watch it. After it, you will understand what client-side templating is about. I don't care if JsRender wins market- or mindshare; it sounds like the templating technologies are all similar, so learn one and you have a vague idea of how they all work.

Structuring JavaScript Code. Not bad. The focus is on “class”-ical programming. BUT, these are the easiest to grok ways to make your code modular.

Underscore Fundamentals. I wanted to benefit more from this than I did. I don’t know if there is a way to absorb the underscore library except through constant attempts to use it. I recommend taking breaks (watch it over a period of days) with coding time in between, to increase the odds of all those methods sinking in.

Knockout fundamentals. I think I get why these client side databinding engines exist now.

Podcasts
I listen to Hanselminutes, .NET Rocks and Herding Code. I listen to these to learn that a technology exists, and who and what it is for. I learned about unit testing, build servers, and the like from podcasts. Despite listening to lots of JavaScript shows, none of them prepared me as well as the books and screencasts did.

Posted in Javascript.


Javascript Intellisense, Pretending to Compile JS

So Visual Studio 2010 is reporting that everything has the same methods: the methods of the base JavaScript Object.

Getting Intellisense to work
1) Maybe the Telerik ScriptManager stomped it. The ScriptManager has to be an Asp:ScriptManager and nothing else. (As of VS2010)
2) Maybe VS wants a Ctrl-Shift-J (manually force a JS intellisense update)
3) Maybe there is a “compile” error.
– Check the Error List tab and look at the yellow warnings.
– Check the General output window, especially after doing a Ctrl-Shift-J
– Try to reformat the code. If it doesn’t reformat, Visual Studio probably can’t “compile” and doesn’t know what to do
– Look for green squiggles
4) Maybe the JS wants to be in its own file. I’ve seen broken intellisense start working after the code was moved from an aspx to a .js file
5) Maybe the annotations (the fake references at the top of a JS file) are in the wrong order. For example, if you are working with Telerik, the Ajax reference should be first, Telerik stuff next, and your own code later, in order of dependency.
6) Maybe you haven’t added enough annotations (especially the fake “references”, but also summary, param, returns annotations)
7) Maybe you used a golden nugget, i.e. <%= Foo() %>, and put it in a JS block (on an ascx or aspx page, of course). The VS JavaScript parser sees this as JS and tries to treat it as malformed JS. When you can cheaply quote it, quote it.
- e.g. var foo = "<%= Foo() %>"; // Just a string.
- e.g. var bar = parseInt("<%= Bar() %>", 10); // Convert to int
- e.g. var bar = "<%= TrueOrFalse().ToLower() %>" === "true"; // Convert to bool
- but maybe/maybe not e.g. eval("<%= GenerateJS() %>"); // This isn't a nice solution because you are doing an unnecessary, expensive, logic-changing eval just to keep intellisense from breaking.
8) Watch this space, I still haven’t gotten page method intellisense to show up reliably.
9) ScriptMode appears to affect intellisense. ScriptMode="DEBUG" has better intellisense, but literally 1000x worse performance for browser execution, especially on IE.

And some mistakes to avoid, especially for ASP.NET developers
<%= Foo() %> syntax does not work in a .js file. .js files are static and not processed by the ASP.NET templating engine.
JS values written to the screen are initial values. Once they are written, they might as well be static. The JS is code generated on the server, but executed on the client.
var now = '<%= DateTime.Now.ToString() %>'; // This isn't going to change.
If you call page methods, they return immediately; the callback happens a few seconds later.
If your page methods blow up, Global.asax will not get an error event, so you have to use try/catch in your page method (see the sketch after this list).
If a page method blows up, it may start erroneously reporting "Authentication Failed" errors. I think this is some version of WCF-style logic, where a client can go into a "faulted" state and just refuse to behave thereafter. Still a theory.
On a single page application (SPA), var === Session. In a multi-page ASP.NET application, you constantly store state in Session because values don’t live past the life of a page. In a single page application, your user doesn’t change pages. So a page variable is Session. It never times out.
All parameters of your page methods are user input. In server-side programming, you might grab a value from the database, store it in Session and use it later to save a record. In the SPA scenario, that value is handed over to the user, and they can change it before it is submitted back to the page method. Tampering is not especially difficult. So as values pass from the server to the JS page and back, they have to be re-validated. Even if you try to keep the values on the server alone, eventually the user will be given a choice of values, and in the page method you'd want to validate that the submitted value was on that list.
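
Here is a minimal sketch of a page method that traps its own errors and re-validates its inputs; the class, method and parameter names are hypothetical, and it assumes a ScriptManager with EnablePageMethods="true" on the page.

using System;
using System.Web.Services;

public partial class OrderPage : System.Web.UI.Page
{
    [WebMethod]
    public static string SaveOrder(string productId, int quantity)
    {
        try
        {
            // Treat every parameter as user input and re-validate it here,
            // even if the value originally came from the server.
            if (quantity < 1 || quantity > 100)
                throw new ArgumentOutOfRangeException("quantity");

            // ... save the order ...
            return "ok";
        }
        catch (Exception ex)
        {
            // Global.asax never sees this exception, so log it here and hand back
            // a message the client-side callback can show.
            return "error: " + ex.Message;
        }
    }
}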

Posted in Javascript, Visual Studio.


HTML 5 Web Storage

Web development is the constant struggle to manage state, state that is constantly disappearing because HTTP is stateless. We are all now experts in using hidden fields (viewstate), cookies, query strings, and server-side session. Now we have one more option, HTML5 web storage:

With shims, anyone can use it now on all browsers: http://www.jstorage.info/

Security-wise, it is not especially secure. You can't store secret data here; it is visible to the user and to any malicious code on the machine. To safely encrypt, you have to encrypt on the server and send it back to the server to decrypt. This saves the cost of sending the data in a cookie with every single request, but the client can't manipulate it.

You have to make sure you don’t share your domain with other applications. So if your shared hosting also shares the same domain, then all apps share the same local storage.

The data in local storage can be tampered with, so it is the equivalent of user input. Which gave me this idea:

Never ask the user anything twice.
Wouldn't it be interesting to have everything the user told you stored for recall? Store the user's last 100 searches. Say you've asked the user for their address: store it locally and re-use it instead of round-tripping to the server. What this seems to address most closely is the sort of problem that ASP.NET Profile addresses. Profile is sort of a bad name– it is really a durable, strongly typed session. It was supposed to be a place to store things like the user's preferred font size, preferred language and other UI settings. Since they are irrelevant to the app's domain (say, selling books), the data can be stored somewhere where it is unlinked to anything else.

And the last scenario is going to be organization specific– in some development teams, getting a new table is a major hurdle. So you begin to look for every trick to avoid having to write to the database, from memory-stored data to file-stored data to local web storage. So let's say your user needs a data snapshot: the data will be stored locally and processed locally, but not sent back to the server (on account of tamper risks). Instead of creating a snapshot table and going through a lengthy dev cycle to get those tables and procs created, we can use web storage.

Anyhow, just an idea. I haven’t even written any sample code.

Posted in Matthew Martin.



So you want to run powershell scripts without admin rights

First, log on as a limited, non-admin user on Windows 7. That should be easy, because large organizations are yanking admin rights and apps are running better without them, so whining to the help desk isn't as effective as it used to be.

Create an empty file, say test.ps1

Try to run it using

.\test.ps1

You can't. Execution of scripts has been disabled. (You try modules and profile scripts: same issue.) So you read up and try running

set-executionpolicy remotesigned

And you get an error message about not being able to modify the registry because you are not admin on your machine you are a limited user. And then you think to your self…There’s a time when the operation of the machine becomes so odious, makes you so sick at heart, that you can’t take part! You can’t even passively take part! And you’ve got to put your bodies upon the gears and upon the wheels…upon the levers, upon all the apparatus, and you’ve got to make it stop! And you’ve got to indicate to the people who run it, to the people who own it, that unless you’re free, the machine will be prevented from working at all!

And with a rebel cry we jump to Google. A non-admin can still run a batch file as a limited user. So can we execute the .ps1 equivalent as a .bat?

Yes! Our friends from the CCCP know how.

Wrap everything in your .ps1 file in a script block:

$code = {
    # code goes here
}

Encode it

[convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($code))

Put it in a batch file like so, named test.bat:

powershell.exe -NoExit -EncodedCommand DQAKAAkAIwBpAHQAZQByAGEAdABlACAAbgB1AG0AYgBlAHIAcwAgADEAIAB0AGgAcgBvAHUAZwBoACAAMQAwAA0ACgAJADEALgAuADEAMAAgAHwAIABmAG8AcgBlAGEAYwBoAC0AbwBiAGoAZQBjAHQAIAB7AA0ACgAJACMAIABqAHUAcwB0ACAAbwB1AHQAcAB1AHQAIAB0AGgAZQBtAA0ACgAJACIAQwB1AHIAcgBlAG4AdAAgAG8AdQB0AHAAdQB0ADoAIgANAAoACQAkAF8ADQAKAAkAfQANAAoA

The code executes and is the moral equivalent of executing a .ps1 file. Except you have no clue what the source is by casual inspection. And it means all non-admin users have to run their ps1 code through a build step.

Jeffrey Snover, tear down this wall! Thanks.

Posted in powershell.


Reading System.Diagnostics from the Mono sources

I've been trying to use System.Diagnostics for a while. I think I see why it failed to catch on. It thinks that developers will write a lot of code for a tertiary customer– the production server admin staff. Why do I think this? A good third of the code and complexity of the namespace is related to XML configuration. A production server admin can't recompile the code, but sometimes, in some organizations, they can change configuration, say a .config, .ini or registry setting. And through these means they could turn trace on and off. But that only matters if the original developers wrote a lot of trace using a library that can be turned on and off. System.Net and System.ServiceModel both use System.Diagnostics trace. Most other framework namespaces do not– you can use Reflector or the like to search the .NET API for instances of TraceSource, and you find out that there are not a lot. People were using Console.WriteLine, Response.Write and every other technique they learned in their first Hello World program.

Making System.Diagnostics Palatable to Developers
* Trace needs to be production safe. Not just for threading, but for performance. Write should take a lambda function instead of a string (a sketch of what I mean follows this list). The listeners shouldn't have a slow default that writes to a hard-to-see destination (the OutputDebugString API).
* Trace should work well in environments where a real database, and possibly the filesystem, isn't available. ASP.NET makes it too hard to write to the console because you have to attach a console to the WebDev server using Win32 API calls; there isn't a built-in Application[], Cache or Session listener, nor is there an OleDb, MS-Access, or Excel listener.
* Trace should allow all components to be customized: Listeners, Sources, Switches, Filters, and output formatting. The last, formatting, is barely developed in the System.Diagnostics API. Switches and Sources have to be completely wrapped to effectively change their behavior. And you can only have one Filter per listener and one Switch per source– a big restriction. Another annoyance is that if you do want to extend the API, you currently have to stick within the constraints of the legacy config– so you can't implement multiple switches per source without abandoning the legacy XML config and writing a whole new config section handler.
* Trace should have a fluent API. I want to be able to express the configuration scenarios in a fluent API and then use an admin page to turn those scenarios on and off. Some typical scenarios: show me the app trace, show me the SQL trace, show me the data trace, show perf trace, show everything, show only the current user, show all users, show me the next 10 minutes, write it to Session and then email it to me. When I have those, I have an incentive to write trace, and when the code goes to production, the production admin will have a chance of diagnosing what is going on.
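
Here is a minimal sketch of the lambda-based Write I have in mind; the extension method and class names are mine, not part of System.Diagnostics.

using System;
using System.Diagnostics;

public static class LazyTraceExtensions
{
    // The message factory only runs if the source's switch says the event will be
    // traced, so expensive string building costs nothing when trace is off.
    public static void TraceLazy(this TraceSource source, TraceEventType eventType, int id, Func<string> messageFactory)
    {
        if (source.Switch.ShouldTrace(eventType))
        {
            source.TraceEvent(eventType, id, messageFactory());
        }
    }
}

class Demo
{
    static void Main()
    {
        var source = new TraceSource("App", SourceLevels.Information);
        source.Listeners.Add(new ConsoleTraceListener());
        source.TraceLazy(TraceEventType.Information, 0, () => "Expensive: " + DateTime.Now.Ticks);
    }
}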

Posted in Trace.


Compiler Error 128

Many things cause compiler error 128. Ref, here.

Sometimes re-registering ASP.NET with IIS (aspnet_regiis.exe) works.

In my case, I had attached a console to a running ASP.NET app. Then I uploaded the correct release build over the top of it (the release build doesn't attach a console), and then I got compiler error 128. It cleared up after an iisreset. If in doubt, pull the power out.

Posted in ASP.NET.


Getting WCF to talk ordinary HTTP to a browser

This is an exercise in driving nails into the coffee table with your shoe. The goal isn't all that obviously beneficial, and the tool isn't the expected tool for the job. WCF wants to speak SOAP to SOAP-aware clients. With the expansion to support a REST API via System.ServiceModel.Web, you can get a WCF service to talk to a browser. HOWEVER:

* The browser doesn’t serialize complex objects to a C# like data type system on Request or Response. Instead you deal primarily in a raw Stream.
* Some browsers don't speak XHTML (they will render it if you call it text/html, but MSIE will render application/xhtml+xml as raw XML), so you can't just return an X(HT)ML payload.
* WCF used this way is a “bring your own view engine” framework. I chose SharpDom for this exercise. It seems like it should be possible to support returning a SharpDom return value that serializes to XHTML with a type of text/html, but I don’t know how to do that.
* MVC already solves a lot of similar problems.

BUT with WCF you get some of those WCF features, like umm, well, when the client is a browser a lot of features aren't available (e.g. fancy transaction support, callbacks, etc.), but you can still do fancy things like instancing, and you can support an HTML browser, JSON and a SOAP interface all on top of mostly the same code.
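
For example, a JSON operation can live on the same service class as the HTML ones. A minimal sketch, with a made-up operation and DTO, assuming the same webHttpBinding endpoint shown at the end of this post:

[OperationContract]
[WebGet(UriTemplate = "ping", ResponseFormat = WebMessageFormat.Json)]
public PingResult Ping()
{
    // WCF serializes the DataContract to JSON because of ResponseFormat,
    // so a browser XMLHttpRequest can call .../web/ping directly.
    return new PingResult { Message = "hello", At = DateTime.Now };
}

[DataContract]
public class PingResult
{
    [DataMember] public string Message { get; set; }
    [DataMember] public DateTime At { get; set; }
}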

Just serving a page is fairly easy. Turn on web support in the config (same as any REST enabling, see end of post),

[WebGet]
public Stream HomePage()
{
    // Return a stream with HTML.
    // I have skipped the view engine; I used SharpDom, and 'model' here is the
    // SharpDom page model built elsewhere.
    MemoryStream stream = new MemoryStream();
    TextWriter writer = new StreamWriter(stream, Encoding.UTF8);
    new PageBuilder().Render(model, writer);
    writer.Flush();
    stream.Position = 0;
    return stream;
}


What will the URL look like? Well, in development on Windows 7, if you don't have admin rights, it will be something like:

http://localhost:8732/Design_Time_Addresses/HelloWorld/web/HomePage

The http://localhost:8732/Design_Time_Addresses/ part is the address a non-admin can register. It looks like you can't register port 8080.

The /web/ part is there because in my endpoint config (below), the endpoint address is "web".

Also notice you have to set an encoding (and I suppose you’ll want that to match what the HTML meta tag says)
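
If you also want the response to go out as text/html rather than the default, one way, assuming WebOperationContext is available because the service is hosted with WebServiceHost, is to set the content type inside the operation:

// Inside an operation such as HomePage(), before returning the stream.
// "text/html; charset=utf-8" should match the StreamWriter encoding and the HTML meta tag.
WebOperationContext.Current.OutgoingResponse.ContentType = "text/html; charset=utf-8";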

[WebInvoke(Method = "POST")]
public Stream AnotherPostBack(Stream streamOfData)
{
    StreamReader reader = new StreamReader(streamOfData);
    String res = reader.ReadToEnd();
    NameValueCollection coll = HttpUtility.ParseQueryString(res);
    //Return a stream of HTML
}

To invoke the above, use a method of POST and an action of

http://localhost:8732/Design_Time_Addresses/HelloWorld/web/AnotherPostBack

And finally, use a web friendly host in your console app

using (WebServiceHost host = new WebServiceHost(typeof(HelloService)))
{
    host.Open();
    Console.ReadLine();
}

http://stackoverflow.com/questions/1850293/wcf-rest-where-is-the-request-data

Also, you can post back to this kind of operation… but for the life of me I can't figure out how to get the content. I can see the headers, I can see the content length, but I can't get at the stream that holds the post's content.

(This StackOverflow Q&A implies that to get the raw content, you have to use reflection to inspect private variables.)

[OperationContract(Action = "POST", ReplyAction = "*")]
[WebInvoke(Method = "POST")]
public Stream PostBack(Message request)
{
    // Headers and content length are visible on the Message,
    // but I haven't found a way to read the raw body from here.
}

Obviously, cookies and URL params are just a matter of inspecting the IncomingRequest.

And the config:

<system.serviceModel>
    <services>
      <service name="WcfForHtml.HelloService" behaviorConfiguration="TestServiceBehavior">
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8732/Design_Time_Addresses/HelloWorld"/>
          </baseAddresses>
        </host>
        <endpoint address="web"
                  binding="webHttpBinding"
                  contract="WcfForHtml.HelloService"
                  behaviorConfiguration="webBehavior">
        </endpoint>
      </service>
    </services>
      <behaviors>
        <!--SERVICE behavior-->
        <serviceBehaviors>
          <behavior name="TestServiceBehavior">
            <serviceMetadata httpGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="true"/>
          </behavior>
        </serviceBehaviors>
        <!--END POINT behavior-->
        <endpointBehaviors>
          <behavior name="webBehavior">
            <webHttp/>    
          </behavior>
        </endpointBehaviors>
      </behaviors>
  </system.serviceModel>

Posted in wcf.


Production Trace

Assume you work in a large organization: you write code, and you really would like to see some diagnostic trace from your app in Test, Staging or Production, but a server admin owns all of those. You can't have the event logs, you can't have remote desktop access, and you can't ask that the web.config be edited to add or remove a System.Diagnostics section. Just imagine.

Production trace needs to be:
- high performing, if it slows down the app, which may already be under load, not good.
- secure, since trace exposes internals, it should have some authorization restrictions
- not require change of code or config files, because large organizations often have paralyzing change management processes
- support a variety of listeners that will meet the requirements above (and if those listeners are write only, then a reader will need to be written)

System.Diagnostics – File
- Perf- Not very performant, will likely have contention for the file.

System.Diagnostics-Console, DebugString, Windows Event Log
- You can’t see it. End of story.

ASP.NET Trace.axd and In Page
- Perf- not so good.
- Security- it’s well known, so security teams often disable it
- Config- Can sort of enable on a by page/by user basis if you use a master page or base page to check rights and maybe a query string.

Custom Session Listener
Write to session. (A rough sketch of this one follows at the end of this list.)
- Okay for single user trace. Need a page to dump the results.
- Perf probably okay, could hog memory.
- Security, pretty good, by default you can only see your own stuff.

Custom Cache Listener
- Write trace to cache
- Will have locking problems
- Won’t hog memory because of cache eviction
- Cache eviction could remove trace too fast.

HttpContext.Items listener +Base page to dump contents at end of request
- Only shows one page of trace at a time
- Probably high perf.
- Won’t show other users
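
Here is a minimal sketch of the Session-backed listener idea; the class and key names are mine, and it assumes it runs inside an ASP.NET request where HttpContext.Current and Session exist.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Web;

public class SessionTraceListener : TraceListener
{
    private const string Key = "__trace";

    private static List<string> Buffer
    {
        get
        {
            var context = HttpContext.Current;
            if (context == null || context.Session == null) return null;

            var list = context.Session[Key] as List<string>;
            if (list == null)
            {
                list = new List<string>();
                context.Session[Key] = list;   // each user only ever sees their own buffer
            }
            return list;
        }
    }

    public override void Write(string message)
    {
        var buffer = Buffer;
        if (buffer != null) buffer.Add(message);
    }

    public override void WriteLine(string message)
    {
        Write(message + Environment.NewLine);
    }
}

A separate page (or a base page) still has to dump the buffer back out, as noted above.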

Posted in Trace.


Cross Database Support

ADO.NET tried really hard to solve the cross-database support problem, and the 2.0 version (or so), with the System.Data.Common namespace, does a pretty good job. But when I tried to support SQL and MS-Access, here is what I ran into:

Connection string management is a pain. If you are configuring an app to support MS-SQL and MS-Access (for a library app, in my case a hit counter), you need up to 6 connection strings:
1) Oledb Access – Because this is the old Cross-DB API of choice
2) ODBC Access – Because OleDb is deprecated and ODBC is the new cross-DB API of choice
3) SQL Oledb – Same template, different provider
4) Native SQL – Some things have to be done natively, such as bulk import.

I need something more than a connection string builder; I need a connection string converter. Once I have the SQL native version, I should get the OleDb version and the ODBC version for free.

Next– ADO.NET doesn't make any effort to convert the SQL text from one dialect to another, not even for parameters. So I had to write that code myself; a rough sketch of the idea follows.
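
Roughly the kind of dialect shim I mean (illustrative, not the actual code from my app): rewrite named parameter markers (@name) into the positional markers (?) that the OleDb and ODBC providers expect. It is naive; it ignores quoted strings, for example.

using System.Text.RegularExpressions;

public static class SqlDialect
{
    public static string ToPositionalParameters(string sql)
    {
        // "SELECT * FROM Hits WHERE Page = @page"  becomes  "SELECT * FROM Hits WHERE Page = ?"
        return Regex.Replace(sql, @"@\w+", "?");
    }
}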

Cross DB When You Can, Native When You Have To
Some application features just require really fast inserts. For MS-SQL that means bulk copy. For MS-Access, that means single-statement batches and a carefully chosen connection string. The System.Data.Common namespace lets you use factories that return either OleDb or native objects, but once they are created it is one or the other. What I wish existed is a systematic way for the code to check for a feature and, if the provider has it, use it, and if it doesn't, fall back. Obviously this sort of feature testing could be a real pain to write for some features, but for things like, say, stored procedures, why would it be hard to check for stored proc support and, when it exists, create a temp or permanent stored proc to execute a command instead of just raw SQL? I haven't really figured out a way to implement this feature.

Are you Really Cross DB Compatible?
Of course I am. After every compile, I stop and test against all 14 database providers and configurations. Yeah, right. If the application isn't writing to the DB right now, I'm not testing it. So after I got MS-Access working, I got SQL working, and MS-Access support broke. Then I got MS-Access going again. Then they both worked. Then I added a new feature with MS-SQL as the dev target. Then MS-Access broke. And so on.

ADO.NET executes one command against one database. What I need to prove that I have cross-DB support is "multi-cast": each command needs to be executed against two or more different databases to prove that the code works with all providers. And this creates a possibly interesting feature, data-tier mirroring, something that usually requires a DBA to carefully set up and that depends on a provider's specific characteristics. With multicast, you can do a heterogeneous mirror– write to a really fast but unreliable datastore and also write to a really slow but reliable datastore.

I plan to implement multi-cast next.
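
Something like this is what I have in mind; a rough sketch using the System.Data.Common factories, with illustrative names rather than working code from the project:

using System.Collections.Generic;
using System.Data.Common;

public static class MultiCast
{
    // Each target is (provider invariant name, connection string),
    // e.g. ("System.Data.SqlClient", "...") and ("System.Data.OleDb", "...").
    public static void ExecuteNonQueryAll(string sql, IEnumerable<KeyValuePair<string, string>> targets)
    {
        foreach (var target in targets)
        {
            DbProviderFactory factory = DbProviderFactories.GetFactory(target.Key);
            using (DbConnection connection = factory.CreateConnection())
            {
                connection.ConnectionString = target.Value;
                connection.Open();
                using (DbCommand command = connection.CreateCommand())
                {
                    command.CommandText = sql;   // assumes the SQL is already in each provider's dialect
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}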

Posted in Matthew Martin.