Features worth searching for:
Log-by-level. E.g. info, warn, verbose, error.
Log-by-module/theme. E.g. MyClass, file1.js, Data, UI, Validation, etc. Sometimes called “groups” or other things.
Log-errors. Info, warn, and verbose entries are all the same data type, but an error is a complex data type and the workflow differs dramatically from the others.
Log-to-different places, e.g. HTML, alert, console, Ajax/Server
Log-formatting. E.g. xml, json, CSV, etc.
Log-correlation. E.g. if you log to multiple places, say a client, a server, a web service, and a database, and a transaction passes through all four, can you correlate the log entries?
Log-analysis. E.g. if you generate *a lot* of log entries, something to search/summarize them would be nice.
Semantic-logging. E.g. logging (arbitrary) structured data as well as strings or a fixed set of fields.
Analytics. Page hit counters. (I didn’t search for these)
Feature Usage. Same, but for applications where feature != page
Console. Sometimes as a place to spew log entries, sometimes as a place for interactive execution of code.
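
Most of the checklist above can be illustrated in a few lines. Here is a minimal sketch of a level-plus-category logger with a pluggable sink; every name in it is invented for illustration and taken from none of the libraries below.

```javascript
// Minimal level + category logger sketch (illustrative only; not any
// library's real API). Levels are ranked so a threshold can filter them.
const LEVELS = { verbose: 0, info: 1, warn: 2, error: 3 };

function createLogger(category, options) {
  const opts = options || {};
  const threshold = LEVELS[opts.level || "info"];
  const sink = opts.sink || ((entry) => console.log(JSON.stringify(entry)));
  const log = (level, message, data) => {
    if (LEVELS[level] < threshold) return;      // log-by-level
    sink({ category, level, message, data });   // semantic logging: structured entry
  };
  return {
    verbose: (m, d) => log("verbose", m, d),
    info: (m, d) => log("info", m, d),
    warn: (m, d) => log("warn", m, d),
    error: (m, d) => log("error", m, d),
  };
}

// Usage: one logger per module/theme ("category"), routed to any sink
// (console, HTML, Ajax, ...) by swapping the sink function.
const entries = [];
const uiLog = createLogger("UI", { level: "warn", sink: (e) => entries.push(e) });
uiLog.info("ignored, below threshold");
uiLog.error("render failed", { widget: "grid" });
```

Swapping the sink is also where log-to-different-places and log-formatting would plug in.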

Repository Queries:
http://bower.io/search/?q=logging 25 entries right now.

https://www.nuget.org/packages?q=javascript+logging Nuget JS logging libraries.

https://nodejsmodules.org/tags/logging – Nodejs Repository query

Various Uncategorized Browser-Centric Libraries
https://github.com/enoex/Bragi-Browser category logger (here they are called “groups”)
https://github.com/better-js-logging/console-logger category & level logger.
https://github.com/latentflip/bows colorful logging by category
https://oaxoa.github.io/Conzole/ Side console.
https://github.com/structured-log/structured-log – Serilog/Structured Log
http://log4javascript.org/ Log4JavaScript – for people who like the log4x API. As of 2015, appears dated & unmaintained.
http://www.softwarementors.com/log4js-ext/ A more up-to-date log4x library. Fancy on-screen log.
https://github.com/stritti/log4js A level-logger. (Many features are IE only)
http://smalljs.org/logging/debug/ Console logger with module filters.
http://jsnlog.com/ Client side logger that sends events to popular server side logging libraries (more than just server side node)
http://www.songho.ca/misc/logger/logger.html on screen (HTML) logging overlay
https://github.com/jbail/lumberjack Monkeypatches the built-in console object.

Microlibraries for Browser
These might not be any smaller than other libraries.
https://github.com/bfattori/LogJS – supports local storage logging.
https://github.com/kapilkaisare/sleeper-agent 4 level logging with an on/off switch at runtime.
https://github.com/mattkanwisher/driftwood.js – 4 level logging with environment switches & ajax
https://github.com/pimterry/loglevel 4-level logging. Supports plugins.
https://github.com/cowboy/javascript-debug/tree/v0.4 console.log wrapper
http://js.jsnlog.com/ Same as JSN Log, but just the JS part, so it’s like a microlibrary.
https://github.com/nhnb/console-log – Polymer/web-component style console logging
https://github.com/icodeforlove/Console.js – Polyfill for pretty-console display?
https://github.com/Couto/groundskeeper – Build step to remove console.log entries before sending to production.

https://github.com/pockata/blackbird-js – Abandoned? Not sure what it does.
NitobiBug -Abandoned. If you look long enough you can find websites that serve up the file.

Browser plug ins
http://getfirebug.com/logging – Firefox centric.

Error Logging
Error logging is more than a print statement. Generally, at the point of error you want to capture all the information that the runtime provides.
http://www.stacktracejs.com/ Stacktrace and more.
http://openmymind.net/2012/4/4/You-Really-Should-Log-Client-Side-Error/ Roll your own
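
As a sketch of what "capture all the information" means in practice, here is a minimal point-of-error capture routine. The field names are my own invention, not any library's schema; in a browser you would wire this up to window.onerror or an "error" event listener and POST the record via Ajax.

```javascript
// Sketch: turn an Error object into a structured record suitable for
// shipping to a server-side logging endpoint. (Field names invented.)
function captureError(err, context) {
  return {
    name: err.name,               // e.g. "TypeError", "SyntaxError"
    message: err.message,
    stack: err.stack || "(no stack available)",
    when: new Date().toISOString(),
    context: context || {},       // page, user, app version, etc.
  };
}

// Demonstration: provoke a real runtime error and capture it.
let record;
try {
  JSON.parse("{not json");
} catch (err) {
  record = captureError(err, { page: "/checkout" });
}
```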

Node Loggers (might work in Browser, not sure)
https://github.com/enoex/Bragi-Node (Node Centric)
https://www.npmjs.com/package/winston (Node Centric)
https://github.com/jstrace/jstrace (Node centric)
https://nodejs.org/en/blog/module/service-logging-in-json-with-bunyan/ Bunyan

Commercial Loggers
Often an open-source client that talks to a commercial server. No idea whether these can work without the server component.

Dumb Services – Accessing a website when there isn’t a web-service API

The dumbest API is a C# WebClient that returns a string. This works on websites that haven’t exposed an asmx, svc or other “service” technology.

What are some speed bumps this presents to other developers who might want to use your website as an API? The assumption here is that there is no coordination between the website’s owners and the developer consuming it.

All websites are REST level X.
Just by the fact that the site works with web browsers and communicates over HTTP, at least some part of the HTTP protocol is being used. Usually only GET and POST, and the server returns a nearly opaque body. By that I mean, the MIME type lets you know that it is HTML, but from the document alone, or even from crawling the whole website, you won’t necessarily programmatically discover what you can do with the website. Furthermore, the HTTP status codes are probably being abused or ignored, resources are inserted and deleted on POST, headers are probably ignored, and so on.

Discovery of the API == Website Crawling.
If you could discover the API, then you could machine-generate a strongly typed API, or at least provide metadata about the API at run time. With a regular HTML page, it will have links and forms. The links are a sort of API. You can crawl the website and find all the published URLs, and infer from their structure what the API might be. The URL might be a fancy “choppable” URL with parameters between slashes, or it might be an old-school query string with parameters as key-value pairs after the ?.

You can similarly discover the forms by crawling the website. Forms at least will let you know all the expected parameters and a little bit about their data types and expected ranges.
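
As a sketch of that discovery step (using naive regexes purely to keep the example self-contained; real code should use an HTML parser):

```javascript
// Sketch: discover the "API" of a plain HTML page by pulling out its
// links and its form parameters. Regexes stand in for a real parser.
function discover(html) {
  const links = [...html.matchAll(/<a[^>]+href="([^"]+)"/g)].map((m) => m[1]);
  const forms = [...html.matchAll(/<form[^>]+action="([^"]+)"[^>]*>([\s\S]*?)<\/form>/g)]
    .map((m) => ({
      action: m[1],
      // Form inputs reveal the expected parameter names (and type hints).
      params: [...m[2].matchAll(/<input[^>]+name="([^"]+)"/g)].map((i) => i[1]),
    }));
  return { links, forms };
}

// A toy page: one link, one search form with two parameters.
const page =
  '<a href="/customers/42">Customer</a>' +
  '<form action="/search" method="get">' +
  '<input name="q" type="text"><input name="page" type="number"></form>';
const api = discover(page);
```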

If the website is JavaScript driven, all bets are off unless you can automate a headless browser. For a single page application (SPA), your GET returns a static template and a lot of JavaScript files. The static template doesn’t necessarily have the links or forms, or if it does, they are not necessarily filled in with anything yet. On the other hand, if a website is an SPA, it probably has a real web-service API.

Remote Procedure Invocation
Each URL represents an end point. The trivial invocations are the GETs. Invocation is a matter of crafting a URL, sending the HTTP GET and deciding what to do with the response (see below.)

The next endpoints are the action URLs of the forms. The forms tell you more explicitly what the possible parameters and data types are.

Data Serialization.
The dumbest way to handle the response from a GET or POST is a string. It is entirely up to the client to figure out what to do with the string. The parsing strategy will depend on the particular scenario. Maybe you are looking for a particular substring. Maybe you are looking for all the numbers. Maybe you are looking for the 3rd date. In the worst-case scenario, there is nothing a dumb-service client writer can do to help.

The next dumb way to handle a dumb-service response is to parse it as HTML or XML, for example with Html Agility Pack, a C# library that turns reasonable HTML into clean XML. This buys you less than you might imagine. If you have an XML document with, say, Customer, Order, Order Line and Total elements, you could almost machine-convert this document into an XSD and corresponding C# classes which can be consumed conveniently by the dumb-service client. But in practice, you get an endless nest of Span, Div and layout Table elements. This might make string parsing look attractive in comparison. Machine XML-to-C# converters, like xsd.exe, have no idea what to do with an HTML document.

The next dumb way is to just extract the tables and forms. This would work if tables are being used as intended: as a way to display data. The rows could then be treated as typed classes.

The next dumb way is to look for microformats. Microformats are annotated HTML snippets that have class attributes that semantically define HTML elements as being consumable data. It is a beautiful idea with very little adoption. The HTML designer works to make a website look good, not to make life easy for Dumb Services. If anyone cared about the user experience of a software developer using a site as a programmable API, they would have provided a proper REST API. It is also imaginable to attempt to detect accidental microformats, for example, if the page is a mess of spans with classes that happen to be semantic, such as “customer”, “phone”, “address”. Without knowing which elements are semantic, the resulting API would be polluted with spurious “green”, “sub-title” and other layout oriented tags.
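
A sketch of that "accidental microformat" hunt; the semantic vocabulary here is my own guess, and the approach is exactly as speculative as the paragraph suggests:

```javascript
// Sketch: look for "accidental microformats" by collecting class
// attributes and keeping only the ones that look semantic rather than
// presentational. The vocabulary is invented for illustration.
const SEMANTIC = new Set(["customer", "phone", "address", "order", "total"]);

function findSemanticClasses(html) {
  const classes = [...html.matchAll(/class="([^"]+)"/g)]
    .flatMap((m) => m[1].split(/\s+/));
  return {
    semantic: classes.filter((c) => SEMANTIC.has(c)),
    noise: classes.filter((c) => !SEMANTIC.has(c)),  // layout-oriented pollution
  };
}

const snippet =
  '<span class="customer bold">Jane</span>' +
  '<span class="sub-title">Hi</span><span class="phone">555-1234</span>';
const found = findSemanticClasses(snippet);
```

The noise bucket is the spurious "green"/"sub-title" problem from the paragraph above: without knowing which classes are intentional, the inferred API is polluted.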

The last dumb way I can think of is HTML 5 semantic tags. If the invocation returns documents, like letters and newspaper articles, then the elements header, footer, section, nav, or article could be used. The world of possible problem domains is huge, though. If you are at a CMS website and want to process documents, this would help. If you are at a travel website and want to see the latest Amtrak discounts, then this won’t help. I imagine 95% of possible use cases don’t include readable documents as an important entity. Another super narrow class of elements would be dd, dl, and dt, which are used for dictionary and glossary definitions.

Can there be a Dumb Services Client Generator?
By that, I mean, how much of the above work could be done by a library? This SO question suggests that up to now, most people are doing dumb services in an ad hoc fashion, except for the HTML parsing.

  • The web crawling part: entirely automatable. Discovering all the GETs, and Forms is easy.
  • The meta-data inference part: Inferring the templates for GETs is hard; inferring the metadata for a form request is easy.
  • The Invocation part is easy.
  • The Deserialization part: Incredibly hard. Only a few scenarios are easy. At best, a library could give the developer a starting point.

What would a proxy client look like? The loosely typed one would, for example, return a list of URLs and strings, and execute requests, again returning some weakly typed data structure, such as string, Stream, or XML, as if all signatures were:

string ExecuteGetString(Uri url, string querystring)
Stream ExecuteGetStream(Uri url, string querystring)
XmlDocument ExecuteGetXml(Uri url, string querystring)

In practice we’d rather something like this:

Customer ExecuteGet(string url, int customerId)

At best, a library could provide a base class that would allow a developer to write a strongly typed client over the top of that.
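
A sketch of both layers; the transport is injected so the example runs without a live website, and every name in it is invented:

```javascript
// Sketch: a weakly typed "dumb service" client that returns strings,
// plus the strongly typed wrapper a developer writes over the top.
// The transport function is injected so this runs without a network.
function createDumbClient(transport) {
  return {
    executeGet: (url, querystring) => transport(url + "?" + querystring),
  };
}

// Hand-written, strongly typed layer: it knows this page shows a customer
// and does the scenario-specific parsing no library can do for you.
function getCustomer(client, customerId) {
  const body = client.executeGet("/customers", "id=" + customerId);
  const name = /<span class="customer">([^<]+)<\/span>/.exec(body)[1];
  return { id: customerId, name };
}

// Fake transport standing in for an HTTP GET.
const fakeTransport = () =>
  '<html><body><span class="customer">Jane Doe</span></body></html>';
const client = createDumbClient(fakeTransport);
const customer = getCustomer(client, 42);
```

The base class the library could provide is essentially createDumbClient; getCustomer is the part that will always be hand-written.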

Using Twitter more effectively as a software developer

FYI: I’m not a technical recruiter. I’m just a software developer.

Have a clear goal Is this to network with every last person in the world who knows about, say, Windows Identity Foundation? Or to make sure you have some professional contacts when your contract ends? Don’t follow people who can’t help you with that goal. If you have mixed goals, open a different account.

Important Career Moments Relevant to Twitter. Arriving in town, leaving town, changing jobs, conferences, starting a new company – if you have a curated Twitter list, it might help at those moments, or it might not; who knows.

At the moment, there are so many jobs for developers and so few developers that the real issue is not finding a job, but finding a job that you like. Another issue is taking control of the job-hunting process. The headhunters most eager to hire you have characteristics like making lots of calls per day and having a smooth hiring pipeline. But there is no particular correlation with what sort of project manager is at the other end of that pipeline.

Goals: Helping Good Jobs Find Developers I’m talking about that day when your boss says, hey, do you know any software developers? And I say, no, I work in a cubicle where I talk to the same 3 people 20 minutes a week. So that was a big part of my goal for creating a twitter following, so that in 3 years, bam, I can say, “Anyone want a job?” and it wouldn’t be just a message in a bottle dropped in the Atlantic. If you don’t care about the job, don’t post it. If a colleague desperately needs to fill a spot at the world’s worst place to work, don’t post it; you’re not a recruiter, you’ve got standards.

Twitter is a lousy place for identifying who is a developer and who is in a geographic region. After exhaustive search, I found fewer than 2000 people in DC who do something related to software development, and of those, maybe 50% are active accounts. There must be more developers and related professionals than that in DC – I guess 10,000 or 20,000.

Making Content: Questions. It works for newbie questions. Anything that might require an answer in depth is better on StackOverflow. And StackOverflow doesn’t want your easy questions anyhow.

Making Content: Discussion. It works for mini-discussions, of maybe 3-4 exchanges, tops. Consider doing a thoughtful question a day. Hash tag it, but don’t pick stupid hash tags, or hash tag spam. #dctech is better than #guesswhat. Consider searching a hash tag before using it. Re-use good hash tags as much as possible to increase discussion around them.

Making Content: Jokes. It works really well for jokes. Now if you actually engage in jokes, that is a personal decision. They are somewhat risky. On the other hand, if you never tell a joke, you’re a boring person who gets unfollowed and moved to a list.

Making Content: Calls to Action. I don’t practice this well myself because it’s hard to do in twitter. Most effective calls to action are some sort of “click this link”, hopefully because after I read the target page, I don’t just chuckle or say, “hmm”, but I do something different in the real world.

Making Content: Don’t do click bait. Not because it isn’t effective, it is effective in making people click. But everyone is doing it and it is junking up news feeds.

Building a Community: Who to Follow? Follow people you wish worked at your office. They may or may not post the content you like, but you can generally fix that by turning off retweets. If they still tweet primarily about stamp collecting, or tweet too much, put them on a list, especially if they don’t follow you back anyhow.

Building a Community: Finding people to Follow Twitter’s own search works best– search for keyword, limit to “people near me” and click “all” content.

Real people follow real accounts, usually. Real people are followed by 50/50 spambots and real people. Unfortunately, people follow stamp collecting and cat photo accounts, but are followed by friends, family and coworkers. If you are looking for industry networking opportunities, you care about the coworkers, not the stamp collecting and cat photo accounts.

Bios on Twitter suck. People fill them with poorly thought out junk. I don’t care who you speak for, I don’t care if your retweets are endorsements. Put the funny joke in an ephemeral tweet, not the bio; followers end up re-reading your bio over and over. Include where you live, your job title, and keywords for what technologies you care about. Well, that’s what I wish people would do, but if you really want to put paranoid legal mumbo jumbo there, at least make sure that it aligns with your goals.

Building a Community: Getting Follow Backs. People follow back on initial follow, and sometimes on favorite and retweet.

Building a Community: Follow “dead” accounts anyhow. They might come back to life because you followed them. Who knows? It’s a numbers game.

Interaction: Retweet or Favorite? Favorite means, “I hear you”, “I read that”, “I am paying attention to you”. Retweet means, “I think every one of my followers really cares about this as much as they care about me.” People get this wrong so much that I generally turn off retweets on every account I follow. I can still see those retweets should an account be on a list I curate.

Retweet what everyone can agree on; Favorite religion and politics. If someone says something you like, it’s a good time for engagement. But not if it means reminding everyone that follows you that after work hours, you are a Republican, Democrat or Libertarian. Favorites are comparatively discreet; the audience has to seek them out to find out what petition you favorited.

In practice, people Retweet when they should Favorite, junking up their followers news feeds with stamp collecting, radical politics, and personal conversations.

Interaction: Do start tweets targeted at one person with the @handle. It prevents that message from showing up in your followers’ feeds. Don’t automatically put the period in front; most people gauge wrong when to thwart the built-in filter system.

Know Your Audience. I have two audiences: my intended audience of software developers in greater DC, and my unintended audience of people who follow me because they agree with my politics, or are interested in the same technologies as me. I have a clear goal, so I know that the audience I’m going to cater to is the one that aligns with my goals. I can’t please everyone, and if I wanted to, I would open a 2nd account.

Lists: Lists are for you. Don’t curate a list with the assumption that anyone cares. They don’t. Consider making lists private if you don’t think the account cares if they’ve been put on a list.

Lists: Create an Audience List The people I follow are great, but the people who follow me back are better. I put them on a private audience list, because they don’t need a notification telling them I’ve put them on an audience list.

People on my general list that don’t follow me back, I hope they will follow me back someday. The people on the audience list, I care about their retweets and tweets more because it’s just much more likely that I’ll get an interaction someday.

Lists: Create a High Volume Tweeter/”Celebrity” list. People who tweet nonstop junk up your feed, move them to a list unless they are following you back. “Celebrities” have 10,000s of followers but only a few people they follow. They probably won’t ever interact with you, but if they do, it will be via you mentioning them, not through a reciprocal follow relationship.

Things that went wrong trying to run pydoc

This is from the standpoint of a beginning Python developer on Windows. I assume you are like all other Python developers and just stare at the machine to twiddle the electrons. These are my in-progress notes on getting pydoc to run.

Install Python. Install it over and over, because just like Java and .NET, there are dozens of side-by-side installs.
Install pip. It may or may not already be there.
Install pywin. I have no idea if it helps. My generators only generated for 3.3.
Install virtual environment so you don’t junk up the global interpreter with packages.

You will want to get c:\python33 and c:\python33\scripts into the path. Set the path the old-fashioned way. If using PowerShell, reload the environment over and over. (pywin is supposed to help with this.)

$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")

ref: https://stackoverflow.com/questions/17794507/reload-the-path-in-powershell

Okay, check it like this

$env:Path.Split(";") | Where {$_.Contains("yth")}

The goal is to get all the sphinx cruft into a \doc\ folder. The whole mess will fight this every step of the way.

cd over to your source-code folder (maybe) and run sphinx-quickstart. (I’m not sure about the best folder; initially I tried running from the c:\python33\source folder, which junks up the Python directory. That seems wrong.)

sphinx-quickstart

If it doesn’t run, your path is screwed up. This command asks a series of questions and creates a conf.py.
When answering, most of the time you accept the default, except for the apidoc stuff, which appears to be “n” by default. It appears that people use Sphinx not just for Python API documentation, but for autobiographies, and it caters to that scenario first.

You’ll need to edit that conf.py

Install nano. You could instead open it with word, or some other inappropriate application. At least nano doesn’t kick you out of the console and it isn’t vim.

If the conf file lives in the same directory as your source code, uncomment this:

sys.path.insert(0, os.path.abspath('.'))

Or if the conf.py file is in a sub directory of your app’s directory, use two dots

sys.path.insert(0, os.path.abspath('..'))

Pause. Go edit your Python files and wrap anything that executes code (rather than just defining functions and classes) in a main guard, so you don’t get side effects when generating documentation. And add some docstrings of the form """blah""" to your functions; this is the expected syntax for doc comments.

if __name__ == '__main__':
    main()  # or whatever the script previously ran at import time

Create the rst files.

sphinx-apidoc -o api .

(maybe – . . would work better?)
I got modules.rst and foldername.rst. The modules.rst just has a reference to foldername.rst.

These .rst files are editable, “markdown”-like things. They need further processing. Firstly, on my machine, sphinx-apidoc adds the folder name as the package, and then on make I get a FolderName.filename package-doesn’t-exist error for each file. So edit the .rst files and fix the package names.

Copy those .rst files into the root of your source control, or wherever conf.py is. I think. Put them in all the directories on the workstation if that doesn’t work. This is like working with Java: nothing knows where anything is.

Run these, although I have no idea what they do:

sphinx-build -b html . build

.\make html

And it spits out some html. Cd to the build/html folder and run this to launch chrome.

Start-Process "chrome.exe" "index.html"

I have no idea if this works, my source folder and python folders are littered with crap created all over the place.

Official docs:
http://sphinx-doc.org/tutorial.html Quick start tutorial. If this works for you, then you are the sort of person that didn’t need docs in the first place. Sort of ironic.
https://codeandchaos.wordpress.com/2012/07/30/sphinx-autodoc-tutorial-for-dummies/ This is the more useful one, but not in itself enough to generate.
http://sphinx-doc.org/man/sphinx-apidoc.html You need this if you are using it for, I don’t know, python API documentation and not your autobiography.

TODO: Turn this into a .bat and .ps1 file.

Follow up Links
PyCharm has a built in understanding of Sphinx.
ref: Overview
ref: Python Integrated Tools Dialog.

REST Levels above 4

The Richardson Maturity Model says REST APIs can be ranked like so (Richardson numbers the levels 0–3; I’ll number them 1–4 so my additions read naturally):

1- Plain old XML. You serve up data in a data-exchange format at an HTTP endpoint. Ignores as much as possible about how HTTP was intended to work.
2- Resources have their own URL
3- Resources can be manipulated with GET, PUT, POST, DELETE
4- Resources return hypermedia, which contains links to other valid actions & acts as the state of the application.

I’ll add these:
5- Metadata. The API supports HEAD, OPTIONS, and some sort of metadata document like HAL.
6- Server-Side Async – There is support for HTTP 202 and an endpoint for checking the status of queued requests. This is not to be confused with client-side async. Server-side async allows the server to close an HTTP connection and keep working on the request; client-side async has to do with not blocking the browser’s UI while waiting for a response from the server.
7- Streaming – There is support for the Range header for returning a resource in chunks of bytes. It is more like resumable download, and not related to the chunk size when you write bytes to the Response. With ranges, the HTTP request comes to a complete end after each chunk is sent.

#5 is universally useful, but there isn’t, AFAIK, a real standard.
#6 & #7 are really only needed when a request is long running or the message size is so large it needs to support resuming.
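
A sketch of the level-6 workflow from the client’s side, simulated in memory rather than over real HTTP: the server answers 202 Accepted with a status URL, and the client polls until the queued work completes.

```javascript
// Sketch: server-side async via HTTP 202, simulated in memory.
// A real server would return 202 Accepted plus a status URL; the
// client polls that URL until the queued request finishes.
function makeFakeServer(ticksUntilDone) {
  let ticks = 0;
  return {
    submit: () => ({ status: 202, statusUrl: "/jobs/1" }),
    poll: () => {
      ticks += 1;
      return ticks >= ticksUntilDone
        ? { status: 200, body: "result ready" }
        : { status: 202 };  // still processing; connection already closed
    },
  };
}

function runJob(server) {
  const accepted = server.submit();
  if (accepted.status !== 202) throw new Error("expected 202 Accepted");
  let response = server.poll();
  let polls = 1;
  while (response.status === 202) {  // keep checking the status endpoint
    response = server.poll();
    polls += 1;
  }
  return { polls, body: response.body };
}

const outcome = runJob(makeFakeServer(3));
```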

Clients should have a similar support Level system.
1 – Can use all verbs (Chrome can’t, not without JavaScript!)
2 – Client Side Caches
3 – Supports chunking & maybe automatically chunks after checking the HEAD
4 – Supports streaming with byte ranges

C# Basics– Classes and basic behavior

What upfront work is necessary to create fully defined classes? By fully defined, I mean there are enough features implemented that the class plays well with the Visual Studio debugger, WCF/WebAPI, JavaScript and XML, and so on.

Ideally, this boilerplate would always be available to consume on your custom classes. In practice, it is uncommon to see any of the following implemented. Why is that?

So let’s simplify reality and imagine that code is of only a few types:

(I’m suffixing all of these with -like to remind you that I’m talking about things that look like these, not necessarily the class or data structure with the same name in the .NET or C# spec or BCL)

  • Primitive-like. Single values that appear in many domains, often formatted differently in different countries. Sometimes simple, like Int32; sometimes crazy complicated, like DateTime; sometimes missing, like “Money”.
  • Struct-like. Really small values, appear in some domains, like Latitude/Longitude pairs.
  • Datarow-like. Many properties, need to be persisted, probably stored in a relational or document database, often exchanged across machine, OS, and organizational boundaries.
  • Service-like. These are classes that may or may not have state depending on the programming paradigm. They are classes with methods that do something, whereas all the above classes mainly just hold data and only incidentally do something. It might be domain-anemic, like create, read, update and delete, or it might be domain-driven, like issue insurance policy, or cancel vacation.
  • Collection-like. These used to be implemented as custom types, but with Generics, there isn’t as much motivation to implement these on a *per type* basis.
  • Tree or Graph-like. These are reference values that contain complex values and collection-like values, and those in turn might contain complex values and collections.

All classes may need the following features

  • Equality- By value, by reference and by domain specific. The out of the box behavior is usually good enough and for reference types shouldn’t be modified. Typically if you do need to modify equality, it is to get by-value or by-primary-key behavior, which is best done in a separate class.
  • Ranking- A type of sorting. This may not be as valuable as it once was, now that LINQ exists and supports .OrderBy(x => …)
  • String representation- A way to represent this for usually human consumption, with features overlapping Serialization
  • Serialization- A usually two way means of converting the class into string, JSON, XML for persistence or communicating off machine
  • Factories, Cloning and Conversion- This covers creation (often made moot by IOC containers, which sometimes have requirements about what a class looks like), cloning, which is a mapping problem (made moot by things like automapper), and finally conversion, which is mapping similar types, such as Int to Decimal, or more like “Legacy Customer” to “Customer”
  • Validation- Asking an object what is wrong, usually for human consumption
  • Persistence- A way to save an object to a datastore. At the moment, this is nhibernate, EF, and maybe others.
  • Metadata- For example, the .NET Type class, an XSD for the serialized format, and so on.
  • Versioning- Many of the above features are affected by version, such as serialization and type conversion, where one may want to convert between types that are the same but separated by time, where properties may have been added or removed. Round-trip conversion without data loss is a type of versioning feature.

How implemented

  • Ad hoc. Just make stuff up. Software should be hard, unpredictable and unmanageable. The real problem is too many people don’t want to read the non-existent documentation of your one-off API.
  • Framework driven. Make best efforts to find existing patterns and copy them. This improves your ability to communicate how your API works to your future self and maybe to other developers.
  • Interface driven. A bit old fashioned, but rampant. For example these:
    //Forms of ToString(); may need additional work for WebAPI
    IFormattable, IFormatProvider, ICustomFormatter,
    //Sort of an alternate constructor/factory pattern
    ICloneable, IConvertible,
    IDisposable, //End of life clean up
    IComparable, IComparable<T>, //Sorting
    //Competing ways to validate an object
    IValidatableObject, IDataErrorInfo,
    //Binary (de)serialization
    ISerializable, IObjectReference
  • Attribute driven. This is now popular for serialization APIs, e.g. DataContract/DataMember, and for certain validations.
  • Base Class- A universal base class that all other classes derive from, implementing some of the above concerns. In practice this isn’t workable, as most of these code snippets vary with the number of properties you have.
  • In-Class- For example, just implement IFormat* on your class. If you need to support two or more ways of implementing an interface, you might be better off implementing several helper classes, varying with the class you are creating features for.
  • Universal Utility Class- You can only pick one base class in C#. If you waste it on a utility class, you might preclude creating a more useful design hierarchy. A universal utility class has the same problem as a universal base class.
  • Code generation. Generate source code using reflection.
  • Reflection. Provide certain features by reflecting over the fields and properties.

All of these patterns entail gotchas. Someday when I’m smarter and have lots of free time, I’ll write about it.

Computer Operating System, Explain it like I’m Five

“Explain it like I’m Five” is an internet meme. It isn’t meant to literally dumb it down to a five-year-old’s level. The assumption is that if you ask someone who understands something at a deep level to explain it in a way the listener can understand, they will overshoot and explain it in a way more appropriate for a more experienced audience.

Operating Systems
Metaphors and models are simplifications of reality that retain certain, but not all characteristics of reality.

So some typical metaphors for a computer are human bodies (the arms and legs are peripherals– the input and output– the brain is the CPU, operating system and applications). A better metaphor would be a human society– the input and output peripherals are organizations like the census and the post office. The various companies and stores are applications. All of these are orchestrated by laws, which work out the fundamental rules for the parts to interact.

The other way is via models. We name a few parts of the total and establish some relationships between those parts. The relationships can sometimes be quite fuzzy. A computer consists of input, output, and a CPU. The CPU essentially does math and moves numbers around. Input takes signals from the outside world. Programs run on the CPU by doing arithmetic and moving the results around, step by step.

If there were no input and output, the application wouldn’t really need an operating system. Many old-style applications were responsible for their own memory management. But an application can’t run in the first place if a boot application doesn’t run. The boot application performs enough actions to get the computer’s memory into a state where it can start to run applications. Booting is called booting as a reference to the story about a guy who got himself up on a roof by pulling up on his own bootstraps. The computer’s boot routine is likewise attempting to get the computer into a state where it can execute applications, but it itself is also an application! These boot routines are part of the hardware.

After the application begins to run, it needs to communicate with the input and output. These functions are normally provided by the operating system, and modern OS’s also take care of a lot of memory management. Because at the instruction level all applications look like arithmetic and moving numbers around in memory, it’s somewhat arbitrary to say where applications end and where the OS functions begin, as illustrated by the lawsuit between Microsoft and the US government over bundling an internet browser into the operating system.

The point of this explanation isn’t to allow you to build your own computer or operating system. The point is to give you a mental model that doesn’t require getting a computer science degree. (And even to get that computer science degree, at some point you will need to put together some internal mental models of computers)

For further reading, see Petzold's "Code", which moves from logical switches through the entire hardware stack to explain how a computer works, including the basic operating system functions. At some points the author successfully dumbs it down; sometimes he gets bogged down in what may be irreducibly complex. I'm personally optimistic about the ability to dumb any concept down to a point where you can get simplified and usable mental models. An example would be calculus, which started out as something only the top mathematicians could do. High school calculus textbooks have since figured out ways to dumb it down so that ordinary people can do calculus. As for proving that calculus works, which seems irreducibly complex, you can read Berlinski's A Tour of the Calculus, which is a sort of calculus for literature majors: not enough calculus to build bridges or prove that it works, but enough to form a usable mental model, or at least to find out whether it is worth studying any further.

DC Agile Software Management Book Club

Okay, I’m at it again. I’ve created a book club to replace another that had gone into hibernation. This one is going to be for these audiences:

- PMs, broadly defined (product managers, program managers; in short, "bosses of software developers")
- Tech leads (The senior developer who has no official organizational power)
- Developers on teams
- Developers who happen to have a boss

Topicwise, it will cover

- Agile software development (as opposed to SDLC, CMMI, waterfall and other process-above-all exercises)
- Software development management
- Teamwork
- Personal productivity using techniques borrowed from the world of software development management

Here is the book list:

1) Kanban, David J. Anderson, $10
2) Managing the Unmanageable: Rules, Tools, and Insights for Managing Software People and Teams by Mickey W. Mantle and Ron Lichty, $17 (watch out for the similarly titled book!)
3) Notes to a Software Team Leader: Growing Self Organizing Teams, Osherove, $23
4) Essential Scrum: A Practical Guide to the Most Popular Agile Process, Rubin, $20
5) Team Geek: A Software Developer's Guide to Working Well with Others, Brian W. Fitzpatrick, $10
6) Peopleware, $17
7) The Mythical Man-Month, $20, $5 used
8) Software Estimation: Demystifying the Black Art, $17
9) Personal Kanban: Mapping Work | Navigating Life, Tonianne DeMaria Barry, Jim Benson, $10
10) Smart and Gets Things Done, $10
11) The Dream Team Nightmare: Boost Team Productivity Using Agile Techniques, Portia Tung (note: a choose-your-own-adventure book), $13
12) Cracking the PM Interview: How to Land a Product Manager Job in Technology, Gayle Laakmann McDowell, $10
13) Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency, $8 / $0.01 used

Writing code for technical interviews, my opinions so far

I like the idea of asking a candidate to write code during an interview. Usually I ask a really simple question like FizzBuzz, and I tell them what the mod operator is. I don't give a hoot if a candidate knows the mod operator; you rarely use it in line-of-business applications. I want to know if they can combine a loop and an if block in a reasonable way. If they can, maybe they are competent. If they can't, they are incompetent. This isn't a silver bullet: the completion of any single programming exercise doesn't say that much about how they will be able to cope with the technical and social dysfunctions of your organization, which can be more important than mere technical competence.
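The loop-plus-if bar described above can be sketched in a few lines. This is just one of many acceptable shapes for the answer, not "the" answer:

```javascript
// A minimal FizzBuzz sketch: one loop, one if/else chain, and the
// mod operator (%) the interviewer can explain up front.
function fizzBuzz(n) {
  var out = [];
  for (var i = 1; i <= n; i++) {
    if (i % 15 === 0) out.push("FizzBuzz");      // divisible by 3 and 5
    else if (i % 3 === 0) out.push("Fizz");
    else if (i % 5 === 0) out.push("Buzz");
    else out.push(String(i));
  }
  return out;
}
// fizzBuzz(5) → ["1", "2", "Fizz", "4", "Buzz"]
```

Whether the candidate checks 15 first, or nests the conditions, matters far less than whether the loop and the branches hang together at all.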

What people have asked me to do
Create a database, tables, a stored procedure for read and insert, and create a UI to do the insert and read. If I remember correctly, it was for a Personnel table. The trick was, this was one of about 4 programming exercises for a 2-hour exam. After doing it once, I realized you could only do it within the time limit if you chose drag-and-drop techniques like SqlDataSource and other techniques that leaned on Visual Studio's code generation. Except all of those techniques (strongly typed XSDs, DataSet-driven database development, a SqlDataSource that binds to a UI grid with no middle component) are out of fashion, deprecated, and considered harmful for production code. And to do it in 30 minutes, you'd have to have practiced in advance. If they wanted me to do a code kata and have it look as smooth as an on-stage conference demo, I'd have to practice like conference presenters do.

I didn’t get that job and the recruiter said the client rejected a whole batch of developers.

Next job interview. I got a multiple-choice test, and then was told to create a db, create a table, populate it with sample data, and write two views (or a stored procedure that used subqueries). I could use the internet. But only 1 of the 3 SQL Server instances on the machine was running, and they didn't give me a connection string. I switched to MS Access. But because of MS Office's restrictions, the guest account could not load features that involved VBA, so no QueryDefs. I almost got a GridView and SqlDataSource up and running, but without a SQL designer, I was writing SQL by hand. SSMS was also broken because the trial license had expired. I ran out of time to prove my SQL would work.

(And there was malware on the machine that injected ads on all sites and then involuntarily redirected you away from a page after a few minutes, so visiting MSDN to check syntax, which they explicitly permitted, was borked until I disabled the malware add-on in Chrome. I notified them about it and they said it was a junker laptop and they didn't care.)

I was also told to write a REST-ready WCF service. Speaking from experience, to set that up and prove it works takes a day. Anyone can right-click, create an SVC file, and add some attributes. But to demonstrate that it all works you need to:

- Machine-generate a ServiceModel section of web.config. Save it off to a separate file so that as it evolves you can easily revert to a working ServiceModel section.
- Create a console app for the host and client. Some bindings aren't easily testable with a Cassini- or IIS-hosted service.
- Create tests that separate failures of the underlying code from failures of the service. Also make sure you have a way to test the basic binding so you can differentiate misconfigured advanced bindings from other errors.
- Create a System.Diagnostics section to turn the WCF trace on and off.
- Since this is a REST API:
- Create a template for the jQuery service call, which is about 50 lines of code (most of those lines are success, failure, and options settings), but it's the same for most service calls.
- Verify that the JSON serializes and deserializes the way you want it. You may need to pass the data as a string and use JSON.stringify/parse to deal with the JSON.
- Configure IIS to respond to PUT and DELETE as well as GET and POST.
- Verify that routing and URLs are working and that a variety of plausible templates don't route to the wrong place.
- If this were a real-world application, a similar amount of effort, equal to the effort described so far, would be spent trying to get your organization to punch holes in the firewalls, to get an exotic single sign-on server to talk to your app, and to figure out why the services work on machines a, b, and c but not d, e, and f.
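The ~50-line jQuery call template mentioned in the list mostly boils down to a settings object. Here is a minimal sketch; `buildServiceCall`, the URL, and the payload are illustrative names I made up, and the comment about `msg.d` reflects the ASP.NET convention of wrapping JSON results:

```javascript
// Hypothetical sketch of the reusable jQuery service-call template.
// Returns the options object you would hand to $.ajax().
function buildServiceCall(url, payload) {
  return {
    type: "POST",
    url: url,
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    // ASMX/WCF JSON endpoints typically want the body as a JSON string:
    data: JSON.stringify(payload),
    success: function (msg) {
      // On ASP.NET services the result is usually wrapped in msg.d
    },
    error: function (xhr, status, err) {
      // Log, retry, or surface the failure to the user
    }
  };
}

// usage (in the browser, with jQuery loaded):
//   $.ajax(buildServiceCall("/Service.svc/GetItems", { id: 7 }));
```

Keeping the template in one function means every service call shares the same content type, serialization, and error handling, which is most of what those 50 lines are doing.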

Instead of conveying my knowledge of that, I was able to convey, with my stupid incomplete SVC file, that I was a rather dim-witted developer who was probably faking it.

Interview Zen
This was on the website http://www.interviewzen.com: solve a programming problem with a screen recorder running, but do so in a browser text box without IntelliSense or unit test tools. This is sort of unfair, so I solved it in Visual Studio and commented out the dumb lines of code I wrote so they could see how it evolved. This was the most fair test so far, but since I wasn't going to do this on billable hours, I had to wait for a quiet period on the weekend. Take-home programming quizzes select for people who are not currently working. But people who aren't currently working and have lots of spare time are also signaling that they aren't the best candidates! (Or it signals they have slack time at their current job, which is fine, or it signals they don't give a hoot about productivity at their current job, which is not.) So doing well on a take-home quiz sends a mixed message.

A Modest Proposal
If you want to ask something more advanced than FizzBuzz, then let them take the test home and let them use all available tools. If you are worried they will just get answers from StackOverflow, make the test hard enough to require programming talent even if you were to post the questions on StackOverflow.

Javascript Intellisense, Pretending to Compile JS

So Visual Studio 2010 is reporting that everything has the same methods: the methods of the JavaScript Object.

Getting Intellisense to work
1) Maybe the Telerik ScriptManager stomped it. The ScriptManager has to be an Asp:ScriptManager and nothing else. (As of VS2010)
2) Maybe VS wants a Ctrl-Shift-J (manually force a JS intellisense update)
3) Maybe there is a “compile” error.
– Check the Error List tab and look at the yellow warnings.
– Check the General output window, especially after doing a Ctrl-Shift-J
– Try to reformat the code. If it doesn’t reformat, Visual Studio probably can’t “compile” and doesn’t know what to do
– Look for green squiggles
4) Maybe the JS wants to be in its own file. I’ve seen broken intellisense start working after the code was moved from an aspx to a .js file
5) Maybe the annotations (the fake references at the top of a JS file) are in the wrong order. For example, if you are working with Telerik, the Ajax reference should be first, Telerik stuff next, and your own code later, in order of dependency.
6) Maybe you haven’t added enough annotations (especially the fake “references”, but also summary, param, returns annotations)
7) Maybe you used a golden nugget, i.e. <%= Foo() %>, and put it in a JS block (on an ascx or aspx page, of course). The VS JavaScript parser sees this as JS and tries to treat it as malformed JS. When you can cheaply quote it, quote it.
- e.g. var foo = "<%= Foo() %>"; // Just a string.
- e.g. var bar = parseInt("<%= Bar() %>", 10); // Convert to int
- e.g. var baz = "<%= TrueOrFalse().ToLower() %>" === "true"; // Convert to bool
- but maybe/maybe not e.g. eval("<%= GenerateJS() %>"); // This isn't a nice solution because you are doing an unnecessary, expensive, logic-changing eval just to keep intellisense from breaking.
8) Watch this space, I still haven’t gotten page method intellisense to show up reliably.
9) ScriptMode appears to affect intellisense. ScriptMode="Debug" has better intellisense, but literally 1000x worse performance for browser execution, especially on IE.

And a mistake to avoid, especially for ASP.NET developers:
<%= Foo() %> syntax does not work in a .js file. .js files are static and not processed by the ASP.NET templating engine.
JS values written to the screen are initial values. Once they are written, they might as well be static. The JS is code generated on the server, but executed on the client.
var now = '<%= DateTime.Now.ToString() %>'; // This isn't going to change.
If you call page methods, they return immediately, the call back happens a few seconds later.
If your page methods blow up, Global.asax will not get an error event, so you have to use try/catch in your page method.
If a page method blows up, it may start erroneously reporting "Authentication Failed" errors. I think this is some version of WCF-style logic, where a client can go into a "faulted" state and just refuse to behave thereafter. Still a theory.
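The return-immediately, callback-later behavior can be made visible with a stubbed proxy. `PageMethods.GetData` and the `pending` queue here are stand-ins I invented so the ordering can be demonstrated outside of an actual ASP.NET page; the real generated proxy issues an async HTTP request instead:

```javascript
// Simulated in-flight requests and an execution-order log.
var pending = [];
var order = [];

// Stand-in for the proxy ASP.NET AJAX generates for [WebMethod]s.
var PageMethods = {
  GetData: function (onSuccess, onError) {
    // The real proxy fires off a request and returns at once;
    // here we just queue the callback to fire "later".
    pending.push(function () { onSuccess("result"); });
  }
};

PageMethods.GetData(
  function (result) { order.push("callback: " + result); },
  function (err) { order.push("error"); }
);
order.push("after call");                      // runs first: the call returned immediately
pending.forEach(function (fire) { fire(); });  // the "response" arrives later
// order is now ["after call", "callback: result"]
```

Anything that must happen after the data arrives has to live inside the success callback; code written on the line after the call runs before the server has even answered.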
On a single page application (SPA), var === Session. In a multi-page ASP.NET application, you constantly store state in Session because values don’t live past the life of a page. In a single page application, your user doesn’t change pages. So a page variable is Session. It never times out.
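The "var === Session" idea above can be sketched with a module-scoped cache. The names (`appState`, `cache`) are illustrative, not from any particular framework:

```javascript
// In a SPA, a variable that outlives any one "view" plays the role
// Session played on the server: it lives until the page is unloaded,
// and unlike Session it never times out.
var appState = (function () {
  var cache = {}; // private, page-lifetime state
  return {
    set: function (key, value) { cache[key] = value; },
    get: function (key) { return cache[key]; }
  };
})();

appState.set("userId", 42);
// appState.get("userId") → 42, from any view, for the life of the page
```

The immediately-invoked function keeps `cache` out of the global namespace, which is the usual discipline once page-lifetime state starts to accumulate.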
All parameters of your page methods are user input. In server side programming, you might grab a value from the database, store it in Session and use it later to save a record. In the SPA scenario, that value is handed over to the user and they can change it before it is submitted back to the page method. The level of difficulty is not especially high. So as values pass from server to JS page and back, they will have to be re-validated. Even if you try to keep the values on the server alone, eventually the user will be given a choice of values, and on the page method, you’d want to validate that these values were on the list.