How to do postbuild tasks painlessly in Visual Studio

So, you’d think you’d need to write custom msbuild targets or even learn msbuild syntax. Wrong!

(Okay, some of this can be done via the project properties "Build Events" tab. But that tab hides the fact that you are writing msbuild code, and now you have batch file code embedded in a visual property page instead of in a file that can be checked into source control, diffed, or run independently.)

You will have to edit your csproj file.

1. Create a postbuild.bat file in the root of your project. Right-click for properties and set "Copy to Output Directory" to "Copy always". A copy of this will now always be put in the bin\Debug or bin\Release folder after each build.

2. Unload your csproj file (right click, “Unload project”), right click again to bring it up for edit.

3. At the very end of the file, find the commented-out AfterBuild target (or add one) and make it look like this:
<Target Name="AfterBuild">
  <Exec Command="CALL postbuild.bat $(OutputPath)" />
</Target>

This passes the bin\Debug or bin\Release path to the batch file. You can pass other MSBuild properties if you need to.

Strangely, the build output window will always report an error regarding the first line of the batch file. It will report that SET or ECHO or DIR or whatever isn’t a recognized command. But the 2nd and subsequent lines of the batch file run just fine.

From here you can now call out to powershell, bash, or do what batch files do.

Features worth searching for:
Log-by-level. E.g. info, warn, verbose, error.
Log-by-module/theme. E.g. MyClass, file1.js, Data, UI, Validation, etc. Sometimes called “groups” or other things.
Log-errors. Info, warn, verbose are all the same data type, but an error is a complex data type and the workflow differs dramatically from the others.
Log-to-different places, e.g. HTML, alert, console, Ajax/Server
Log-formatting. E.g. xml, json, CSV, etc.
Log-correlation. E.g. if you log to several places, say a client, server, web service and db, and a transaction passes through all four, can you correlate the log entries?
Log-analysis. E.g. if you generate *a lot* of log entries, something to search/summarize them would be nice.
Semantic-logging. E.g. logging (arbitrary) structured data as well as strings or a fixed set of fields.
Analytics. Page hit counters. (I didn’t search for these)
Feature Usage. Same, but for applications where feature != page
Console. Sometimes as a place to spew log entries, sometimes as a place for interactive execution of code.
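To make the first two features concrete, here is a minimal sketch of a level-plus-category logger; all the names here are invented for illustration, not taken from any particular library:

```javascript
// Minimal sketch of level + category ("group") logging.
// All names are made up for illustration, not from any real library.
const LEVELS = { verbose: 0, info: 1, warn: 2, error: 3 };

function createLogger(category, minLevel = "info", enabledCategories = null) {
  function log(level, message) {
    // Filter by level...
    if (LEVELS[level] < LEVELS[minLevel]) return;
    // ...and by category/module, if a whitelist was given.
    if (enabledCategories && !enabledCategories.includes(category)) return;
    console.log(`[${level}] [${category}] ${message}`);
  }
  return {
    verbose: (m) => log("verbose", m),
    info: (m) => log("info", m),
    warn: (m) => log("warn", m),
    error: (m) => log("error", m),
  };
}

const uiLog = createLogger("UI", "warn");
uiLog.info("suppressed: below the minimum level");
uiLog.error("this one is printed, tagged with level and category");
```

Real libraries layer the other features (appenders, formatting, correlation ids) on top of this same core filter.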

Repository Queries: 25 entries right now. NuGet JS logging libraries. Node.js repository query.

Various Uncategorized Browser-Centric Libraries
- category logger (here they are called “groups”)
- category & level logger
- colorful logging by category
- Side console
- Serilog/Structured Log
- Log4JavaScript – for people who like the log4x API. As of 2015, appears dated & unmaintained.
- More up-to-date log4x library
- Fancy on-screen log
- A level-logger (many features are IE only)
- Console logger with module filters
- Client-side logger that sends events to popular server-side logging libraries (more than just server side node)
- On-screen (HTML) logging overlay
- Monkeypatches the built-in console object

Microlibraries for Browser
These might not be any smaller than other libraries.
- Supports local storage logging
- 4 level logging with an on/off switch at runtime
- 4 level logging with environment switches & ajax
- 4 level logging. Supports plugins.
- console.log wrapper
- Same as JSNLog, but just the JS part, so it’s like a microlibrary
- Polymer/web-component style console logging
- Polyfill for pretty-console display?
- Build step to remove console.log entries before sending to production

Abandonware
- Abandoned? Not sure what it does.
- NitobiBug – Abandoned. If you look long enough you can find websites that serve up the file.

Browser plug-ins – Firefox-centric.

Error Logging
Error logging is more than a print statement. Generally, at the point of error you want to capture all the information that the runtime provides: the stack trace and more. Or roll your own.
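A minimal roll-your-own sketch: serialize everything the Error object carries, not just the message. In a browser you would wire this into window.onerror; serializeError is an invented name:

```javascript
// Sketch of a roll-your-own error logger: at the point of error, capture
// everything the runtime provides, not just a message string.
function serializeError(error, context = {}) {
  return {
    name: error.name,
    message: error.message,
    stack: error.stack,              // the part a plain print statement loses
    time: new Date().toISOString(),
    ...context                       // e.g. module, url, user agent
  };
}

try {
  JSON.parse("{not valid json");
} catch (e) {
  console.log(JSON.stringify(serializeError(e, { module: "Validation" })));
}
```

The serialized object can then be sent to any of the log destinations above (console, HTML, Ajax to the server).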

Node Loggers (might work in Browser, not sure)
- (Node centric)
- (Node centric)
- (Node centric)
- Bunyan

Commercial Loggers
Often an open-source client that talks to a commercial server. No idea if these can work without the server component.

Using Twitter more effectively as a software developer

FYI: I’m not a technical recruiter. I’m just a software developer.

Have a clear goal. Is this to network with every last person in the world who knows about, say, Windows Identity Foundation? Or to make sure you have some professional contacts when your contract ends? Don’t follow people that can’t help you with that goal. If you have mixed goals, open a different account.

Important Career Moments Relevant to Twitter. Arriving town, leaving town and changing jobs, conferences, starting a new company– if you have a curated twitter list, it might help at those time points, or it might not, who knows.

At the moment, there are so many jobs for developers and so few developers that the real issue is not finding a job, but finding a job that you like. Another issue is taking control of the job hunting process. The headhunters most eager to hire you have certain characteristics: they make lots of calls per day and they have a smooth hiring pipeline. But there is no particular correlation with what sort of project manager is at the other end of that pipeline.

Goals: Helping Good Jobs Find Developers. I’m talking about that day when your boss says, hey, do you know any software developers? And I say, no, I work in a cubicle where I talk to the same 3 people 20 minutes a week. So that was a big part of my goal for creating a twitter following: so that in 3 years, bam, I can say, “Anyone want a job?” and it wouldn’t be just a message in a bottle dropped in the Atlantic. If you don’t care about the job, don’t post it. If a colleague desperately needs to fill a spot at the world’s worst place to work, don’t post it; you’re not a recruiter, you’ve got standards.

Twitter is a lousy place for identifying who is a developer and who is in a geographic region. After exhaustive search, I found less than 2000 people in DC who do something related to software development, and of those, maybe 50% are active accounts. There must be more developers and related professionals than that in DC; I guess 10,000 or 20,000.

Making Content: Questions. It works for newbie questions. Anything that might require an answer in depth is better on StackOverflow. And StackOverflow doesn’t want your easy questions anyhow.

Making Content: Discussion. It works for mini-discussions, of maybe 3-4 exchanges, tops. Consider doing a thoughtful question a day. Hash tag it, but don’t pick stupid hash tags, or hash tag spam. #dctech is better than #guesswhat Consider searching a hash tag before using it. Re-use good hash tags as much as possible to increase discussion around a hashtag.

Making Content: Jokes. It works really well for jokes. Now whether you actually engage in jokes is a personal decision. They are somewhat risky. On the other hand, if you never tell a joke, you’re a boring person who gets unfollowed and moved to a list.

Making Content: Calls to Action. I don’t practice this well myself because it’s hard to do in twitter. Most effective calls to action are some sort of “click this link”, hopefully because after I read the target page, I don’t just chuckle or say, “hmm”, but I do something different in the real world.

Making Content: Don’t do click bait. Not because it isn’t effective, it is effective in making people click. But everyone is doing it and it is junking up news feeds.

Building a Community: Who to Follow? Follow people you wish worked at your office. They may or may not post the content you like, but you can generally fix that by turning off retweets. If they still tweet primarily about stamp collecting, or tweet too much, put them on a list, especially if they don’t follow you back anyhow.

Building a Community: Finding people to Follow Twitter’s own search works best– search for keyword, limit to “people near me” and click “all” content.

Real people follow real accounts, usually. Real people are followed by 50/50 spambots and real people. Unfortunately, people follow stamp collecting and cat photo accounts, but are followed by friends, family and coworkers. If you are looking for industry networking opportunities, you care about the coworkers, not the stamp collecting and cat photo accounts.

Bios on Twitter suck. People fill them with poorly thought out junk. I don’t care who you speak for; I don’t care if your retweets are endorsements. Put the funny joke in an ephemeral tweet, not the bio: followers end up re-reading your bio over and over. Include where you live, your job title and keywords for what technologies you care about. Well, that’s what I wish people would do, but if you really want to put paranoid legal mumbo jumbo there, at least make sure that it aligns with your goals.

Building a Community: Getting Follow Backs. People follow back on initial follow, and sometimes on favorite and retweet.

Building a Community: Follow “dead” accounts anyhow. They might come back to life because you followed them. Who knows? It’s a numbers game.

Interaction: Retweet or Favorite? Favorite means, “I hear you”, “I read that”, “I am paying attention to you”. Retweet means, “I think every one of my followers really cares about this as much as they care about me.” People get this wrong so much that I generally turn off retweets on every account I follow. I can still see those retweets should an account be on a list I curate.

Retweet what everyone can agree on; Favorite religion and politics. If someone says something you like, it’s a good time for engagement. But not if it means reminding everyone that follows you that after work hours, you are a Republican, Democrat or Libertarian. Favorites are comparatively discreet; the audience has to seek them out to find out what petition you favorited.

In practice, people Retweet when they should Favorite, junking up their followers news feeds with stamp collecting, radical politics, and personal conversations.

Interaction: Do start tweets targeted at one person with the @handle. It prevents that message from showing up in your followers’ feeds. Don’t automatically put the period in front; most people gauge wrongly when to thwart the built-in filter system.

Know Your Audience. I have two audiences: my intended audience of software developers in greater DC, and my unintended audience of people who follow me because they agree with my politics or are interested in the same technologies as me. I have a clear goal, so I know that the audience I’m going to cater to is the one that aligns with my goals. I can’t please everyone, and if I wanted to, I would open a 2nd account.

Lists: Lists are for you. Don’t curate a list with the assumption that anyone cares. They don’t. Consider making lists private if you don’t think the account cares if they’ve been put on a list.

Lists: Create an Audience List The people I follow are great, but the people that follow me back are better. I put them on a private audience list because they don’t need a notification hearing that I’ve put them on an audience list.

People on my general list that don’t follow me back, I hope they will follow me back someday. The people on the audience list, I care about their retweets and tweets more because it’s just much more likely that I’ll get an interaction someday.

Lists: Create a High Volume Tweeter/”Celebrity” list. People who tweet nonstop junk up your feed, move them to a list unless they are following you back. “Celebrities” have 10,000s of followers but only a few people they follow. They probably won’t ever interact with you, but if they do, it will be via you mentioning them, not through a reciprocal follow relationship.

Computer Operating System, Explain it like I’m Five

“Explain it like I’m Five” is an internet meme. It isn’t meant literally, as in dumbing it down to a five year old’s level. The assumption is that if you ask someone who understands something at a deep level to explain it in a way the listener can understand, they will undershoot and explain it in a way more appropriate for a more experienced audience.

Operating Systems
Metaphors and models are simplifications of reality that retain certain, but not all characteristics of reality.

So some typical metaphors for a computer are human bodies (the arms and legs are peripherals– the input and output– the brain is the CPU, operating system and applications). A better metaphor would be a human society– the input and output peripherals are organizations like the census and the post office. The various companies and stores are applications. All of these are orchestrated by laws, which work out the fundamental rules for the parts to interact.

The other way is via models. We name a few parts of the total and establish some relationships between these parts. The relationships can sometimes be quite fuzzy. A computer consists of input, output, and a CPU. The CPU essentially does math and moves numbers around. Input takes signals from the outside world. Programs run on the CPU by doing arithmetic and moving the results around, step by step. If there were no input and output, the application wouldn’t really need an operating system. Many old-style applications were responsible for memory management.

But an application can’t run in the first place if a boot application doesn’t run. The boot application performs enough actions to get the computer’s memory into a state where it can start to run applications. Booting is called booting as a reference to the story about a guy who got himself up onto a roof by pulling on his own bootstraps. The computer’s boot routine likewise is attempting to get the computer into a state where it can execute applications, but it itself is also an application! These boot routines are part of the hardware.

After an application begins to run, it needs to communicate with the input and output. These functions are normally provided by the operating system, and modern OS’s also take care of a lot of memory management. Because at the instruction level all applications look like arithmetic and moving numbers around in memory, it’s somewhat arbitrary to say where applications end and where the OS functions begin, as illustrated by the lawsuit between Microsoft and the US government over bundling an internet browser into the operating system.

The point of this explanation isn’t to allow you to build your own computer or operating system. The point is to give you a mental model that doesn’t require getting a computer science degree. (And even to get that computer science degree, at some point you will need to put together some internal mental models of computers)

For further reading, see Petzold’s “Code“, which moves from logical switches through the entire hardware stack to explain how a computer works, including the basic operating system functions. At some points the author successfully dumbs it down; sometimes he gets bogged down in what may be irreducibly complex. I’m personally optimistic about the ability to dumb any concept down to a point where you can get simplified and usable mental models. An example would be calculus, which started out as something only the top mathematicians could do. High school calculus textbooks have since figured out ways to dumb it down so that ordinary people can do calculus. As for proving that calculus works, which seems irreducibly complex, you can read Berlinski’s A Tour of the Calculus, which is a sort of Calculus for literature majors: not enough calculus to build bridges or prove that it works, but enough to have a usable mental model, or at least find out whether it is worth studying any further.

DC Agile Software Management Book Club

Okay, I’m at it again. I’ve created a book club to replace another that had gone into hibernation. This one is going to be for these audiences:

- PM’s broadly defined (product managers, program managers, in short “bosses of software developers”)
- Tech leads (The senior developer who has no official organizational power)
- Developers on teams
- Developers who happen to have a boss

Topicwise, it will cover

- Agile software development (as opposed to SDLC, CMMI, waterfall and other process-above-all exercises)
- Software development management
- Teamwork
- Personal productivity using techniques borrowed from the world of software development management

Here is the book list:

1) Kanban, David J. Anderson $10
2) Managing the Unmanageable: Rules, Tools, and Insights for Managing Software People and Teams by Mickey W. Mantle and Ron Lichty, $17 (watch out for similarly titled book!)
3) Notes to a Software Team Leader: Growing Self Organizing Teams , Osherove, $23
4) Essential Scrum: A Practical Guide to the Most Popular Agile Process, Rubin $20
5) Team Geek: A Software Developer’s Guide to Working Well with Others Brian W. Fitzpatrick $10
6) Peopleware $17
7) Mythical Man Month, $20, $5 used
8) Software Estimation: Demystifying the Black Art , $17
9) Personal Kanban: Mapping Work | Navigating Life, Tonianne DeMaria Barry , Jim Benson, $10
10) Smart and Gets Things Done, $10
11) The Dream Team Nightmare: Boost Team Productivity Using Agile Techniques, Portia Tung (note: choose your own adventure book), $13
12) Cracking the PM Interview: How to Land a Product Manager Job in Technology, Gayle Laakmann McDowell, $10
13) Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency, $8 / $0.01 used

Writing code for technical interviews, my opinions so far

I like the idea of asking a candidate to write code during an interview. Usually I ask a really simple question like FizzBuzz, and I tell them what the mod operator is. I don’t give a hoot if a candidate knows the mod operator; you rarely use it in line-of-business applications. I want to know if they can combine a loop and an if block in a reasonable way. If they can, maybe they are competent. If they can’t, they are incompetent. This isn’t a silver bullet: the completion of any single programming exercise doesn’t say that much about how they will be able to cope with the technical and social dysfunctions of your organization, which can be more important than mere technical competencies.

What people have asked me to do
Create a database, tables, a stored procedure for read and insert, and create a UI to do the insert and read. If I remember correctly, it was for a Personnel table. The trick was, this was one of about 4 programming exercises for a 2 hour exam. After doing it once, I realized you could only do it within the time limit if you chose drag-and-drop techniques like SqlDataSource and other techniques that leaned on Visual Studio’s code generation. Except all of those techniques (strongly typed XSDs, DataSet-driven database development, SqlDataSource that binds to a UI Grid with no middle component) are out of fashion, deprecated and considered harmful for production code. And to do it in 30 minutes, you’d have to have practiced in advance. If they wanted me to do a code kata and for it to look as smooth as an on-stage conference demo, I’d have to practice like conference presenters do.

I didn’t get that job and the recruiter said the client rejected a whole batch of developers.

Next job interview. I got a multiple-choice test, and then was told to create a db, create a table, populate it with sample data, and write two views (or a stored procedure that used subqueries). I could use the internet. But only 1 instance of SQL Server out of 3 on the machine was running. They didn’t give me a connection string. I switched to MS-Access. But because of MS-Office’s restrictions, the guest account could not load features that involved VBA, so no QueryDefs. I almost got a GridView and SqlDataSource up and running, but without a SQL designer, I was writing SQL by hand. SSMS was also broken because the trial license had expired. I ran out of time to prove my SQL would work.

(And there was malware on the machine that injected ads on all sites and then involuntarily redirected you away from a page after a few minutes, so visiting MSDN to check syntax, which they explicitly permitted, was borked until I disabled the malware add on in chrome. I notified them about it and they said it was a junker laptop and they didn’t care)

I was also told to write a REST-ready WCF service. Speaking from experience, to set that up and prove it works takes a day. Anyone can right click, create an SVC file and add some attributes. But to demonstrate that it all works you need to:

- Machine generate a ServiceModel section of web.config. Save it off to a separate file so as it evolves you can easily revert back to a working ServiceModel section.
- Create a console app for the host and client. Some bindings aren’t easily testable with a Cassini or IIS hosted service.
- Create tests that separate failures of the underlying code from the service. Also make sure you have a way to test the basic binding so you can differentiate between misconfigured advanced bindings from other errors.
- Create a System.Diagnostics section to turn on and off the WCF trace.
- Since this is a REST API:
- Create a template for the jQuery service call, which is about 50 lines of code (most of those lines are success, failure and options settings), but it’s the same for most service calls.
- Verify that the JSON serializes and deserializes the way you want it. You may need to pass the data as a string and use JSON.stringify/parse to deal with the JSON.
- Configure IIS to respond to PUT and DELETE as well as GET and POST.
- Verify that routing and URL are working and that a variety of plausible templates don’t route to the wrong place.
- If this was a real-world application, a similar amount of effort, equal to everything described so far, will be spent trying to get your organization to punch holes in the firewalls, to get an exotic single sign-on server to talk to your app, and to figure out why the services work on machines a, b and c but not d, e, and f.
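The jQuery call template from the list above can be sketched roughly like this; the URL, payload, and handler names are placeholders, and a real template would carry more option settings:

```javascript
// Rough sketch of a reusable jQuery service-call template.
// URL, payload, and handler names are placeholders, not a real service.
function callService(url, method, payload, onSuccess, onFailure) {
  $.ajax({
    url: url,
    type: method,                       // GET, POST, PUT or DELETE
    contentType: "application/json",
    // Stringify up front so the WCF side sees well-formed JSON.
    data: payload ? JSON.stringify(payload) : null,
    dataType: "json",
    success: function (result) { onSuccess(result); },
    error: function (xhr, status, err) { onFailure(status + ": " + err); }
  });
}

// Example call (hypothetical endpoint):
// callService("/api/people/5", "PUT", { name: "Ada" }, showPerson, showError);
```

Because every endpoint reuses the same template, only the URL, verb, and handlers vary per call.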

Instead of conveying my knowledge of that, I was able to convey, with my stupid incomplete SVC file, that I was a rather dim witted developer who was probably faking it.

Interview Zen
This was on the website: solve a programming problem with a screen recorder running, but do so in a browser text box without intellisense or unit test tools. This is sort of unfair. So I solved it in Visual Studio and commented out the dumb lines of code I wrote so they could see how it evolved. This was the most fair test so far, but since I wasn’t going to do this on billable hours, I had to wait until a quiet period on the weekend. Take-home programming quizzes will select for people who are not currently working. But people who aren’t currently working and have lots of spare time are also signaling that they aren’t the best candidates! (Or it signals they have slack time at their current job, which is fine, or it signals they don’t give a hoot about productivity at their current job, which is not.) So doing well on a take-home quiz sends a mixed message.

A Modest Proposal
If you want to ask something more advanced than FizzBuzz, then let them take the test home and let them use all available tools. If you are worried they will just get answers from StackOverflow, make the test hard enough to require programming talent even if you were to post the questions on StackOverflow.

HTML 5 Web Storage

Web development is the constant struggle to manage state, state that is constantly disappearing because HTTP is stateless. We are all now experts in using hidden fields (viewstate), cookies, query strings, and server side session. Now we have one more option, HTML5 web storage.

With shims, anyone can use it now on all browsers.
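A minimal feature-detection sketch; the fallback here is an invented in-memory stand-in, not any particular shim library:

```javascript
// Feature-detect Web Storage and fall back to an in-memory stand-in with
// the same getItem/setItem/removeItem surface, so calling code runs even
// where localStorage is missing or throws (e.g. some private-mode browsers).
function getStore() {
  try {
    localStorage.setItem("__probe__", "1");   // probe with a write; some browsers throw here
    localStorage.removeItem("__probe__");
    return localStorage;
  } catch (e) {
    const data = {};
    return {
      getItem: function (k) { return k in data ? data[k] : null; },
      setItem: function (k, v) { data[k] = String(v); },
      removeItem: function (k) { delete data[k]; }
    };
  }
}

const store = getStore();
store.setItem("lastSearch", "html5 web storage");
```

The in-memory fallback loses durability, of course, but the rest of the app code doesn't have to care which one it got.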

Security-wise, it’s not especially secure. You can’t store secret data here; it is public to the user and any malicious code on the machine. To safely encrypt, you have to encrypt on the server and send back to the server to decrypt. This saves the cost of sending the data in a cookie for every single request, but the client can’t manipulate it.

You have to make sure you don’t share your domain with other applications. So if your shared hosting also shares the same domain, then all apps share the same local storage.

The data in local storage can be tampered with, so it is the equivalent of user input. Which gave me this idea:

Never ask the user anything twice.
Wouldn’t it be interesting to have everything the user told you stored for recall? Store the user’s last 100 searches. So you’ve asked the user for their address. Store it locally and re-use that instead of round-tripping to the server. What this seems to address most closely is the sort of problems that ASP.NET Profile addresses. Profile is sort of a bad name: it is a durable, strongly typed session. It was supposed to be a place to store things like the user’s preferred font size, preferred language and other UI settings. Since they are irrelevant to the app’s domain (say selling books), the data can be stored somewhere where it is unlinked to anything else.
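A sketch of the last-100-searches idea. The store argument is anything with the localStorage getItem/setItem surface (in a browser, window.localStorage); the function names are made up, and since this is client-held data, anything read back should be treated as untrusted user input:

```javascript
// Sketch of "never ask the user anything twice": keep the user's last
// 100 searches for recall without a server round trip.
const MAX_SEARCHES = 100;

function rememberSearch(store, term) {
  const searches = JSON.parse(store.getItem("searches") || "[]");
  searches.unshift(term);                               // most recent first
  store.setItem("searches", JSON.stringify(searches.slice(0, MAX_SEARCHES)));
}

function recentSearches(store) {
  return JSON.parse(store.getItem("searches") || "[]");
}
```

The same shape works for the address example: serialize the form answers under a key, read them back to pre-fill the form next time.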

And the last scenario is going to be organization-specific: on some development teams, getting a new table is a major hurdle. So you begin to look for every trick to avoid having to write to the database, from memory-stored data to file-stored data to local web storage. So let’s say your user needs a data snapshot: data will be stored locally, processed locally but not sent back to the server (on account of tamper risks). Instead of creating a snapshot table and going through a lengthy dev cycle to get those tables and procs created, we can use web storage.

Anyhow, just an idea. I haven’t even written any sample code.

Cross Database Support

ADO.NET tried really hard to solve the cross database support problem. And the 2.0 version (or so), with System.Data.Common namespace does a pretty good job. But when I tried to support SQL and MS-Access, here is what I ran into:

Connection string management is a pain. If you are configuring an app to support MS-SQL and MS-Access (for a library app, in my case a hit counter), you need several connection strings:
1) Oledb Access – Because this is the old Cross-DB API of choice
2) ODBC Access – Because OleDb is deprecated and ODBC is the new cross-DB API of choice
3) SQL Oledb – Same template, different provider
4) Native SQL – Some things have to be done natively, such as bulk import.

I need something more than a connection string builder, I need a connection string converter. Once I have the SQL native version, I should get the OleDB version and the ODBC version for free.

Next: ADO.NET doesn’t make any effort to convert the SQL text from one dialect to another, not even for parameters. So I wrote this code myself.

Cross DB When You Can, Native When you Have To
Some application features just require really fast inserts. For MS-SQL that means bulk copy. For MS-Access, that means single statement batches and a carefully chosen connection string. The System.Data.Common namespace lets you use factories that return either OleDB or native, but once they are created it is one or the other. What I wish there was, was a systematic way for the code to check for a feature and, if the provider has it, use it; if it doesn’t, fall back. Obviously this sort of feature testing could be a real pain to write for some features, but for things like, say, stored procedures, why would it be hard to check for stored proc support and, when it exists, create a temp or perm stored proc to execute a command instead of just raw SQL? I haven’t really figured out a way to implement this feature.

Are you Really Cross DB Compatible?
Of course I am. After every compile, I stop and test against all 14 database providers & configurations. Yeah right. If the application isn’t writing to the DB right now, I’m not testing it. So after I got MS-Access working, I got SQL working. MS-Access support broke. Then I got MS-Access going again. Then they both worked. Then I added a new feature with MS-SQL as the dev target. Then MS-Access broke. And so on.

ADO.NET executes one command against one database. What I need to prove that I have cross-DB support is “multi-cast”. Each command needs to be executed against two or more different databases to prove that the code works with all providers. And this creates a possibly interesting feature of data-tier mirroring, a feature that usually requires a DBA to carefully set it up and depends on a provider’s specific characteristics. With multicast, you can do a heterogeneous mirror: write to a really fast but unreliable datastore and also write to a really slow but reliable datastore.

I plan to implement multi-cast next.

Sites I am migrating

I created them in spare time and they add up after a while:

.NET Efforts
- A static mirror I’m hosting for jan Pije
- A front end to a word generation tool
- Helps generate linguistic interlinear gloss formatting
- A .NET wiki that has some stale info about how to be a locavore in the DC area

PHP Efforts
- A directory of language resources in DC
- Not sure what to do with this. It is a wiki right now.
- A content site that is essentially a blog post about using twitter for foreign language learning
- A landing page I used for a google ads campaign for my icelandic meetup

And that is about it.

Customizations I used with Elmah

Elmah isn’t especially secure if you assume the error log itself has already been breached. Even if it hasn’t been breached, sometimes Elmah logs things that the administrator doesn’t want to know, like other people’s passwords.

There are some reliability issues too.

1) Don’t log sensitive data.
- Some data is well known, e.g. HTTP headers
- Some data is not well known, e.g. textboxes where you enter your password
- Viewstate for the above
2) Don’t refer to DLLs that won’t exist, for fear that dynamic compilation will fail due to a reference that can’t be found. For example, SQLite. I understand why the main project is set up this way though: the goal was to minimize the number of assemblies distributed and still support lots of databases. This could also be a non-issue. Assembly resolution, for me, has always been black magic.
3) Override Email to use the app’s config, instead of Elmah’s config sections in the ErrorMailModule. I don’t like doubled config settings, where my app has a setting and so does the component.
4) Use the app’s role system and PrincipalPermission to restrict display to certain roles
- Add PrincipalPermission to all classes that view things (but not classes that log things); see the end for a list. If you don’t trust your server admins to keep from messing up the web.config, you can put the role checks right into the code. This set worked for me.
5) Strengthen XSS protections.
Change Mask. and HttpUtility.HtmlEncode to AntiXss.HtmlEncode. This creates a dependency on either the AntiXss library or .NET 4.0.
6) Add CDATA to javascript blocks
7) Switch to READ UNCOMMITTED. The error log must not cause errors (i.e. deadlocking)
8) When error log gets really large, it has to be rolled over and truncated to prevent locking issues. This at least was a problem in SQL 2000 and I think SQL 2005.

List of classes that could use a security attribute, should you choose such a strategy.

AboutPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorDetailPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorDigestRssHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorHtmlPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorJsonHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogDownloadHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogPage.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorLogPageFactory.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorRssHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
ErrorXmlHandler.cs [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]