Adapting a template for a resume

For various reasons, I’ve become way too familiar with the technologies associated with creating resumes. For the record, the coolest are: CV Maker, LaTeX resume templates, JsonResume, and just about anyone’s HTML5 resume template. The HTML5 templates are written by people who actually have artistic taste, so they look beautiful. No way could I do the same in a short time, so I bought a template. (Never mind that the template store’s UI let me buy the wrong template before I bought the right one; let’s focus on the happy parts of this experience.)

To use it for myself, I had to:

Assemble the raw material for a resume. StackOverflow Careers is my ground truth for resume data. From there I copy it to USAJobs and so on.

Load it up in IntelliJ. Visual Studio with ReSharper is not too bad, but if you just use IntelliJ, you get all the goodness that ReSharper was giving you and more.

Disable the PHP mailer. A contact form is just a spam channel. Don’t ask me why spammers think they can make money sending mail to web admins (unless maybe it’s spear phishing). I considered not showing my email address, but the spam harvesters already have my email address and Google already perfectly filters my spam.

Strip out the boilerplate. Every time you think you’ve got it all, there are more references to John Doe.

Fix the loading image. The loader waited for all assets to render before it would remove the annoying spinner and curtain. But the page didn’t have any elements that the user might interact with too early. The page didn’t have any flashes of unstyled content like you see with Angular. There weren’t any UI elements suddenly snapping into place on the document-ready event, like a certain app I’ve worked on before.

De-minify the code. This should be easy, right? A template has to ship with the JavaScript source code, but the code was minified. So I pointed the page to the non-minified version. The whole page broke. Finally I noticed the minified file actually contained version 1.2 while the non-minified version that shipped was 1.0. So I de-minified the code myself and could begin removing the extraneous loading image.

Upload to my hosted virtual machine. FileZilla all of a sudden decides it can connect but can’t do anything. Some minutes later, I figure out that TunnelBear (my VPN utility) and FileZilla don’t play well together. So I added an exception for my wakayos.com domain.

Write a blog post. I just wanted my resume to have a nice container. But since I’m a developer, it sort of looks like maybe I wrote this from scratch. I certainly did not.

What a Build Master should know and do

An automated build is beautiful. It takes one click to run. Clicking the run button is completely deskilled; it can be delegated to the cat.

Setting up and troubleshooting a build server, on the other hand, is unavoidably senior developer work, but I encourage everyone to start as soon as they can stomach the complexity.

Does it compile?

A good build pays close attention to the build options, as a production release will have different options from your workstation build. If it builds on your machine, you may still have accidentally sabotaged performance and security on the production server. Review all the compilation options.
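
Here’s a contrived sketch of the kind of thing I mean. The PaymentGateway class is made up for illustration, but the trap is real: if the production release is accidentally compiled with the Debug configuration, the workstation shortcut ships to customers.

using System;

public class PaymentGateway // hypothetical class, for illustration
{
    public void Charge(decimal amount)
    {
#if DEBUG
        // Handy on a workstation, a security hole in production. This is
        // exactly what "it builds on my machine" doesn't tell you.
        Console.WriteLine("DEBUG build: skipping certificate validation");
#else
        ValidateCertificate();
#endif
        // ... charge the card ...
    }

    private void ValidateCertificate() { /* ... */ }
}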

In the case of C#, the rest of the build script is the csproj file, which is MSBuild code, which is executable XML. You don’t need to know how it works until stuff breaks, and then you need to know enough MSBuild to fix it. Also, because the build server sometimes doesn’t or can’t have the IDE on it, the MSBuild script might be the only way to modify how the build is done.

The TFS build profile is written in XAML, which again is executable XML. Sometimes it has to be edited, for example if you want to get TFS to support anything but MSTest. Fear the day you need to.

Technologies to know: MSBuild, IDEs (VS20XX), the TFS GUI, maybe XAML, possibly JS and CSS compilers like TypeScript and SASS

Is it fetching source code correctly? Can it compile immediately after checkout to a clean folder?

When there are 50 manual steps between checking the code out and compiling it, the build master must fix all of them. Again, it builds on the workstation, but all that proves is that you have a possibly non-repeatable build.

Maybe 90% of the headaches have to do with libraries, or nowadays, package managers like NuGet, Bower, npm, etc. A sloppy project makes no effort to put dependencies into source control, and crappy tooling means the build server or build script is unaware of the package managers.

Technologies to know: TFS, NuGet, Bower, npm, your IDE

What is “good” as far as a build goes?

A good build server is opinionated and doesn’t ship whatever successfully writes to a hard drive. Depending on the technology, there may not even be such a thing as compilation; those technologies have to be validated with lint, unit tests, and so on. These checks can be configured as failing or non-failing post-build tasks, and if they don’t fail the build, they are often just ignored. Failing unit tests should fail a build. Other failing tasks probably should fail a build too, even if they aren’t production artifacts. I usually wish I could fail a build on lint problems, but depending on the linter and the specific problems, sometimes there just isn’t enough time to address (literally) 1,000,000 lint warnings.
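
For what it’s worth, here’s a minimal NUnit test of the sort I mean (the Invoice class is invented for the example). If this goes red, the build server should refuse to ship:

using NUnit.Framework;

public class Invoice // invented for the example
{
    public decimal Subtotal;
    public decimal TaxRate;
    public decimal Total { get { return Subtotal * (1 + TaxRate); } }
}

[TestFixture]
public class InvoiceTests
{
    [Test]
    public void Total_IncludesTax()
    {
        var invoice = new Invoice { Subtotal = 100m, TaxRate = 0.05m };
        // A failing assertion here should fail the whole build.
        Assert.AreEqual(105m, invoice.Total);
    }
}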

Technologies to know: MSTest, NUnit, xUnit, and other unit test frameworks for each technology in your stack.

Who fixes the failing tests? Who fixes the bugs?

The build master, depending on the organization and how dysfunctional it is, is either completely or partially responsible for fixing the build. There is no way to write a manual for how to do this. Essentially, as a build master, you have to dig into someone else’s code and demonstrate they broke the build and are obliged to fix it, or quietly fix it yourself, or whatever the team culture allows you to do.

Technologies to know: NUnit, debugging, trace

We’ve got a good build, now what? Process.

Not so fast! Depending on the larger organization’s policies with respect to command and control, you may need to get a long list of sign-offs from people before you can deploy to the next environment. Sometimes you can have the build server deploy directly to the relevant environment; sometimes it spits out a zipped package to be consumed by some sort of deployment script. Usually, though, the build server can’t deploy directly to production due to air gaps or cultural barriers.

Technologies to know: Jira or whatever issue tracker is being used.
Non-technologies to know: your organization’s official and informal laws, rules, and customs regarding deployment.

The Grand Council of Release Poobahs and your boss said okay, now what?
This step is often the most resistant to automation. It has steps that can’t be known in advance, like filling in the production password, production file paths, and IP addresses.

MSBuild supports no fewer than two XML transformation syntaxes for changing XML config for each environment.

For environments you know about, it may be advisable to do environment discovery. It’s either wonderful or an easy way to shoot yourself in the foot. When you know the target server is a Windows 2008 Server and on such servers it must do X, and on Win 7 workstations it must do Y, don’t forget to think about the Windows 10 machine that didn’t exist when you wrote your environment discovery code. Maybe it should blow up on an unknown machine, maybe it should fall back to a safe default; either way, decide deliberately.
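
A sketch of what I mean, assuming you key off the OS version (real code might also look at machine names or config). The important line is the last one: on a machine you never anticipated, blow up loudly instead of guessing.

using System;

public static class EnvironmentDiscovery
{
    public static string GetDeployProfile()
    {
        Version os = Environment.OSVersion.Version;
        if (os.Major == 6 && os.Minor == 0) return "Server2008"; // must do X
        if (os.Major == 6 && os.Minor == 1) return "Win7";       // must do Y
        // The Windows 10 machine that didn't exist when this was written:
        throw new NotSupportedException(
            "Unknown OS version " + os + "; update the discovery code.");
    }
}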

Technologies to know: batch, PowerShell, MSDeploy, MS Word
Non-technologies to know: your organization’s official and informal laws, rules, and customs regarding deployment.

Optional Stuff

Build servers like TFS also have bug trackers, requirements databases, SSRS, SSAS (Analysis Services), and build farm features built into them. They are all optional, and each one is a huge skill. SSAS alone requires the implanting of a supplemental brain so you can read and write MDX queries.

Also optional is learning how other build servers work. No single build server has won over all organizations, so you will eventually come across TeamCity, Luntbuild, etc.

Error Messages: On_Error_Insult_User

In the good ole days, when we hit an error, we’d just print a routine insult to the user on the console:

If input_arg < 0 Then Print "You are such a dumb ass"

We now hide our contempt in secret code:

If input_arg < 0 Then Print "Please contact your administrator"
/* You are such a dumb ass */

Sometimes we prefer the “I’m smarter than the runtime” pattern and send developers on a wild goose chase.
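
Something like this sketch: catch the real exception, throw away its message and stack trace, and substitute your own guess about what went wrong.

using System;
using System.IO;

public class ConfigLoader
{
    public string Load(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (Exception)
        {
            // The actual problem might be permissions, a lock, or a bad
            // drive mapping, but the maintenance developer will now spend
            // the afternoon checking whether the file exists.
            throw new Exception("File not found.");
        }
    }
}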

Commenting Code

It is important to comment your code for the maintenance developer’s benefit.

//instantiate object
Foo fooObject = new Foo();

This is important because maintenance developers are unlikely to recognize the most common single-line code pattern in object-oriented development.

//call a method in the class
fooObject.SomeMethod(1,2,3);

This likewise is important because maintenance developers are unlikely to grasp the principle of methods and invocations.

//I’m smart and you’re stupid.

This is important because any maintenance developer who doesn’t grasp object instantiation and method invocation probably needs to be explicitly filled in on your utter lack of respect.

Keep in mind, that according to accurately compiled statistics, 61%* of all code will be debugged and maintained by the original developer.

APIs I Don’t Like

WMI. Relies on barely discoverable magic words (namespace paths). Uses SQL as a metaphor, but doesn’t actually have a proper database or relational structure behind it.
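
A typical WMI call from C# shows both complaints at once: the namespace path and class name are the magic words, and the query looks like SQL even though there’s no real database underneath.

using System;
using System.Management; // requires a reference to System.Management.dll

class WmiDemo
{
    static void Main()
    {
        // "root\CIMV2" and "Win32_OperatingSystem" are the magic words;
        // good luck discovering them without a reference in hand.
        var searcher = new ManagementObjectSearcher(
            @"root\CIMV2",
            "SELECT FreePhysicalMemory FROM Win32_OperatingSystem");
        foreach (ManagementObject os in searcher.Get())
        {
            Console.WriteLine(os["FreePhysicalMemory"]);
        }
    }
}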

ADSI. Incomprehensible. Can’t easily play around with it, since local authentication stores don’t act like Active Directory, and mere developers don’t usually have rights to Active Directory. Hence, no path to gaining competence.

Win32. Not friendly to languages other than C++. Incomprehensible.

Crypto API. Incomprehensible.

JavaScript. Generally undiscoverable, but getting better with some IDEs.

APIs That Are Better

COM, when early bound. Example: classic ADO. Somewhat discoverable. When late bound, relies on barely discoverable magic words (construction strings, method calls). Not friendly to languages that weren’t built specifically to be COM friendly.
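
Late-bound COM from C# looks something like this sketch (the connection string is just an example). Misspell the ProgID or a method name and you find out at runtime, with no intellisense to save you.

using System;

class LateBoundComDemo
{
    static void Main()
    {
        // "ADODB.Connection" is a magic word, discoverable only from
        // documentation or the registry.
        Type t = Type.GetTypeFromProgID("ADODB.Connection");
        dynamic conn = Activator.CreateInstance(t);
        conn.Open("Provider=SQLOLEDB;Data Source=.;Integrated Security=SSPI");
        Console.WriteLine(conn.State);
        conn.Close();
    }
}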

WSDL web services. Not friendly unless you are using tools that “do it all for you.”

APIs I Like

REST. Plays very well with all programming languages that can make a GET request and receive an HTTP response.
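
In C#, for example, one HttpClient call gets you a resource (the URL is hypothetical):

using System;
using System.Net.Http;

class RestDemo
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Any language that can send a GET can do the same.
            string json = client.GetStringAsync(
                "https://api.example.com/resumes/42").Result;
            Console.WriteLine(json);
        }
    }
}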

.NET Framework. Highly discoverable; the documentation strategy is built into the framework.

Patterns

  • Discoverable metadata. APIs that could support intellisense, if the IDE supported it, are good. APIs that make it too hard for IDEs to support intellisense are bad.
  • Independent. APIs that have a fierce registration burden, I don’t like.
  • Built-in documentation. APIs should have javadoc-style documentation features (see the sketch after this list).
  • Aspect-Oriented Programming features. APIs that automatically support Trace/Debug/Logging are good.
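
In C#, XML doc comments are the kind of built-in documentation I mean: the IDE reads them back to the caller as intellisense. A made-up example:

public class TemperatureConverter
{
    /// <summary>Converts a Fahrenheit temperature to Celsius.</summary>
    /// <param name="fahrenheit">Degrees Fahrenheit.</param>
    /// <returns>Degrees Celsius.</returns>
    public double ToCelsius(double fahrenheit)
    {
        return (fahrenheit - 32) * 5.0 / 9.0;
    }
}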

Pedagogical APIs

This summer my project was to teach my son to program, as it was last summer. We created an SMS text message translator (e.g., CU L8R = see you later), a version of Hammurabi, a primitive predator-prey population simulator, and got started on an MMO as a way to explore object-oriented programming. This year, we wrote our programs from scratch, as opposed to mostly copying them from a book. Last year, we translated code from an old VB-for-kids book into C#.

Whilst writing these programs with my son, I realized I didn’t have quite the APIs I wished I had.

I wish there were some APIs that were domain-specific, like Hammurabi-type-game specific. Most of the games were easy, but had one or two damn hard parts. For example, creating a Dice class that works in ASP.NET turned out to be a major distraction from the more relevant task of understanding the difference between fields, methods, if blocks, for blocks, and so on.
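
For the record, the Dice class itself is only a sketch’s worth of code; the distraction was everything around it. Something like this (the ASP.NET state wrangling omitted):

using System;

public class Dice
{
    // One shared Random: newing one up per roll seeds from the clock,
    // and on a fast page every "roll" can come out the same.
    private static readonly Random Rng = new Random();

    private readonly int sides;

    public Dice(int sides)
    {
        this.sides = sides;
    }

    public int Roll()
    {
        return Rng.Next(1, sides + 1);
    }
}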

If there were such a pedagogical API, it would have a bunch of functions that produce desirable quantities (going up, peaking, going back down) without a multi-hour detour into how second-degree polynomials work.
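
Something like this hypothetical helper: a curve that rises from zero, peaks, and falls back to zero, with the polynomial hidden inside.

public static class Curves
{
    // Rises from 0 at t = 0, peaks at 'peak' when t = peakTime, and
    // returns to 0 at t = 2 * peakTime. The second-degree polynomial the
    // kid doesn't need to see yet lives in here.
    public static double RiseAndFall(double t, double peakTime, double peak)
    {
        double x = t / peakTime - 1.0; // -1 at start, 0 at peak, +1 at end
        return peak * (1.0 - x * x);
    }
}

Then Curves.RiseAndFall(year, 5, 1000) gives a rat population that peaks at 1,000 in year five, and the lesson stays about fields and for blocks.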

Anyhow, if I ever get the time, I’ll rewrite these mini-applications, factor them into the hard parts and the easy parts, and publish them as pedagogical APIs.

I can tell you as a geek who learned programming as a child, I’d rather have had APIs in C# than a funky kid-specific language. Even as a ten-year-old I knew that Logo was for sissies and real code was in Atari BASIC.

Wow! I have to say I’m impressed with Microsoft

In the good ole days of COM, I didn’t have the platform or the audience to let anyone know that COM was a technology only a mother could love. So unusable and arcane that it was elitist.

Now when I blog, I’ve gotten responses from project managers at Microsoft. Jeffrey Snover (a PM for PowerShell) replied to one of my posts with my notes about batch versus .NET code versus PowerShell. And now, after getting thoroughly ticked off after a few days’ worth of work and a couple of dollars trying to implement InfoCard/CardSpace, I got a response from Kim Cameron, the PM for InfoCard (a detailed one, even!).