Monday, September 20, 2010

The Latest Development Fads...

Hey Blogspot,

Sorry I haven't written in a while -- I'll have something new soon. In the meantime, I had this random observation today: given that time is money (especially in the software development business), would you Bank on anecdotal evidence?

When the next new technique rolls around, will the talk in the office be about the peer-reviewed studies showing its productivity benefits, or will it all start with "has anyone heard of or tried out X"?

- Bo

Monday, August 23, 2010

The Marginal Unit of Testing

Dear Blogspot,

Here's a puzzle for you. If society benefits from the availability of water, why is a cup of water not provided everywhere? Sure, they will offer you one at eating establishments, but why not at bookstores, or electronics stores? Granted, water is not free, but if it is so necessary for human survival, you'd think we would demand a glass upfront at every venue, just like we demand air to breathe wherever we go.

The reason we don't has to do with the costs versus the marginal benefit. Radio Shack would need to incur capital and recurring expenses to offer something that most people would not even want to consume were it offered there. Meanwhile, the demand for drinks at restaurants is extremely high, due to the complementary good of Food that is consumed there. Put simply: the marginal benefit of that next glass of water, at most venues, is well below the cost of provision. Not at every venue, but at most.

This important economic insight applies equally to an important (and recently Hallowed and Revered, thanks to T.D.D.) object such as the Unit Test. A unit test, of course, is programming code written to test the functionality of other programming code. It is distinguished from integration or functional testing precisely by the small segments of code that it targets. Unit Tests are almost always written by the same programmer who wrote (or will write) the code being tested.

Like water, unit tests are important. Also like water, the provision of unit tests is not free. However, unlike water, far too many developers (especially in the last 5-10 years) would be Horrified to discover even one "venue" where unit tests are not provided. This reflects, in my humble opinion dear Blogspot, a failure to think on the margin.

Like with Water and Radio Shack, spotting the most cost-inefficient places for unit tests is pretty easy: getters and setters, code that does trivial calculations, code that generates logs for developer purposes, or code whose failure is more cheaply spotted by integration testing (such as user interfaces), etc. Below this it gets rather fuzzy -- one might argue that code that has a high call count, has numerous dependencies, is modified frequently, has failed previously, or that performs a function whose failure would be disastrous are all great candidates for unit tests.
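To make that margin concrete, here is a hypothetical Python sketch (the `Invoice` class, its rates, and both tests are invented for illustration): a test of a trivial getter can essentially only fail if the language itself is broken, while a test of a non-trivial calculation guards an actual decision.

```python
class Invoice:
    """Hypothetical invoice with a trivial accessor and a non-trivial calculation."""
    def __init__(self, subtotal, tax_rate):
        self._subtotal = subtotal
        self._tax_rate = tax_rate

    def get_subtotal(self):
        # Trivial getter: low marginal benefit to testing it.
        return self._subtotal

    def total(self):
        # Rate and rounding logic: a regression here costs real money.
        return round(self._subtotal * (1 + self._tax_rate), 2)

def test_getter():
    # Marginal value near zero: this merely re-states the assignment above.
    assert Invoice(100.0, 0.08).get_subtotal() == 100.0

def test_total_rounds_to_cents():
    # Marginal value high: pins down the rounding-to-two-places decision.
    assert Invoice(19.99, 0.0825).total() == 21.64

# Run the tests when executed as a script.
test_getter()
test_total_rounds_to_cents()
```

The point is not that the first test is wrong, only that its marginal benefit is well below even its small cost of provision and upkeep.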

Either way, treating Unit Tests like Water and reasoning that "because they are important, they must be equally demanded in all situations" is both fallacious (see the Fallacy of Composition & Division) and a wasteful use of resources.


Thursday, August 5, 2010

Price Shopping

Dear Blogspot,

Today I looked at web site statistics. I saw interesting data, including numbers of page views, typical page navigation flows, and similar metrics. The web developer then tied those metrics directly into how his automated web site testing tool was configured. The output from the testing tool, combined with the metrics, then directly informed the time he spent improving and optimizing specific aspects of the web site. This started me thinking about whether such statistics might be useful in directing all web developer tasks.

In markets, prices are signals of public preference. Higher prices signal higher preference, and often cause more production effort in that direction. For a free public web site, especially one dedicated to an unreleased product, there are no prices anywhere to be found! Therefore, like the apparatchik economists of the 20th century, web developers have only statistics to approximate this function.

So, does this work? Is this a good approximation? Not even close...

In the first place, while usage statistics reflect demand, the only cost incurred by the free web page viewer is their time. Suppose that an attempt was made to make the site profitable by charging the user a fixed price for each page click. The (few) remaining web users' viewing habits would change considerably, and the usage statistics would be radically altered. Higher value pages would gain considerably relative to lower value pages. Users would likely use bookmarks to skip index/portal-like pages in favor of going directly to the pages they want most.

In the second place, while usage statistics aggregate access, they do not capture the intensity of the desire for the various pages viewed -- what economists measure as willingness to pay, and whose sensitivity to price is called price elasticity of demand. This is why, in our hypothetical pay-per-click site, the fixed page price would cause those pages that people value below the fixed price, such as the index and portal-like pages, to lose traffic relative to other pages.
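The effect can be sketched numerically (the page names, valuations, and traffic counts here are all invented): under a naive model where a page keeps its clicks only if visitors value it above a fixed per-click price, the navigation pages fall off entirely while the destination pages survive.

```python
# Hypothetical per-page valuations: dollars a typical visitor would pay per view.
valuations = {
    "index":        0.01,   # navigation stop, little value in itself
    "portal":       0.02,
    "product_docs": 0.25,
    "downloads":    0.40,
}

# When the site is free, everyone clicks everything equally.
free_traffic = {page: 1000 for page in valuations}

def paid_traffic(price_per_click):
    """Naive model: a page keeps its traffic only if visitors value it at or above the price."""
    return {page: (views if valuations[page] >= price_per_click else 0)
            for page, views in free_traffic.items()}

print(paid_traffic(0.10))
# The index and portal pages drop to zero clicks; the destination pages keep theirs,
# so the statistics that looked uniform under a zero price were hiding the real demand.
```

A real user would substitute and bookmark rather than vanish, of course; the sketch only shows how a zero price flattens the signal.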

Lastly, and this goes back to a previous blog post, demand determined from site statistics cannot be used to determine the profitability of spending more developer time on even the most-viewed pages. This is due to the difficulty in comparing the market-cost of a developer's time with the time spent on particular pages. Perhaps maintaining the most popular pages has developer time costs that make such work unwarranted. Perhaps the developer should spend his time instead on NEW pages. Without prices at both ends of the equation, it's impossible to say.

Web statistics are very useful things. For the purposes that this particular web developer was putting them to, they were very near perfect. However, if such direct demand signals are otherwise useless in directing web developer tasks, how much harder is it to determine the best use of the time of non-web developers, who don't even have that?


Wednesday, July 28, 2010

Wanted: VCC 9.5 For-Loop Expert, min 8yr

Dear Blogspot,

In the great span of events, it seems only yesterday that those who understood the mystical language of computers were called "Computer Programmers". Along with their fellow "Systems Administrators" and "Computer Operators", they formed the labor force behind the vital computing capital in large American corporations and governments.

My, but how times have changed.

Despite the fact that computers are in far more locations, Computer Operators have largely disappeared, due to the increasing ability of computer hardware to operate for indefinite periods without much attention. Systems Administrators have been broken into numerous specialized roles, dealing with upgrades, client software systems, security software, networking arrangements and components, monitoring, etc.

And likewise has gone the "Computer Programmer". Today, the market asks for specializations undreamed of in the past: mere skill with a specific programming language is now the bare minimum. No one bothers to hire a C++ programmer for a PHP position. Knowledge of specific libraries, components, environments, and platforms also regularly appears on job descriptions. Even familiarity with specific development processes has entered demand.

So long ago, Adam Smith told us that "the division of labour is limited by the extent of the market".

The market for those who work on computers is large indeed.

Tuesday, July 20, 2010

Who will count the cost?

Dear Blogspot,

In case you didn't know, the average salary of a software engineer in the U.S. ranges from $50k to $80k per year, not including benefits. That works out to an average cost of roughly $32/hour, plus benefits.
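The hourly figure is just the midpoint of that salary range spread over a standard work year, as this quick check shows:

```python
# Midpoint of the quoted $50k-$80k annual salary range, benefits excluded.
low, high = 50_000, 80_000
midpoint = (low + high) / 2          # $65,000

# A standard work year: 52 weeks of 40 hours.
hours_per_year = 52 * 40             # 2,080 hours

hourly = midpoint / hours_per_year
print(f"${hourly:.2f}/hour")         # about $31.25, i.e. roughly $32 once you round up
```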

The comparative advantage of the software engineer is in the creation of algorithms written in particular languages, for particular platforms, using particular tools, which will cause a computer to behave in a manner desired. These actions, in turn, generate further institutional knowledge possessed only by those engineers, which then gives those engineers comparative advantage in the production of all desired secondary goods which require that knowledge as an input, such as certain kinds of technical documentation, operating procedure documentation, manuals, reports, and inputs to other company processes.

In a software company, this is the source of serious problems. If a software engineer is the input to ALL outputs, this will drive the value of those engineers to the company upwards, which in turn drives the cost of generating those secondary outputs upwards as well. An example of this is not simply a developer who finds his time consumed with software maintenance years after the software was written, but one who also finds himself in meetings discussing reports about that software instead of working on new algorithms or fixing old ones.

A cost-conscious company will (or should) be motivated to reduce those costs. Under previous development methodologies (most notably Waterfall), this cost was managed by dividing the work of today's engineers into design and implementation, with design handled by software analysts, and implementation by programmers. However, cost and efficiency problems with this arrangement eventually led to the consolidation of those positions under new methodologies (most notably Agile). In the end, it was discovered that it was more efficient to consolidate software knowledge into an engineer than to bear the constant intercommunication costs created when the tasks were divided, especially when the primary input into both systems, namely business requirements, was constantly changing or being updated.

However, it still leaves the problem of the high cost of secondary goods. This is especially a problem in larger companies, where the demander and consumer of secondary goods from engineers are increasingly detached from those who are counting the costs of the process as a whole. For instance, if a manager in department A wants to include engineers from department B in a meeting to benefit department A, there are usually no systemic checks to prevent a waste of resources, when the benefits to A may be marginal, while the costs to B are considerable.

In markets, such wastes are checked by the price system. If a resource is more productive for use A than for use B, then this will be reflected in the high profit margin for A relative to B, resulting in more of the resource being utilized by the former.

Also, when the relative value of secondary goods is open to interpretation, and there is no price guidance, there is insufficient motivation for discovering ways to "produce more" of different aspects of the engineer's knowledge by spreading it around, which would otherwise lower the cost of some secondary goods versus others.

So, Blogspot, what is to be done? Perhaps a commenter has a suggestion for this systemic dilemma.

Thursday, July 1, 2010

What's This For?

Dear Blogspot,

Some great things just seem to always go together: bright flowers and lawnmowers, bugs and magnifying glasses, anything and a bored cat. Why then is it so uncommon (but not unheard of) to find software engineering and economics paired up?

I aim to be fixing this.

As you know, Blogspot, my name is Bo Zimmerman, and I'm a software engineer in Austin, Texas. I realize my enthusiasm for economics is no substitute for a carefully studied degree, so be sure to weigh that carefully with anything posted on your hallowed hard drives.

Sincerely,
Bo Zimmerman