Thursday, December 20, 2007

Keeping it Simple

I posted this on worse than failure's sidebar because I just had to share it, but I'm putting it here in my diary for keeps:

I'm a hack looking for a new job. Hack that I am, even I know that simplicity is a laudable goal in software development. I was quite amused to see one poster proudly declare, "Our software is among the most complex ever created". That is some accomplishment! Not sure how long the Google cache link will last.

This brings to mind some quotes from a few luminaries:

  • UNIX is simple. It just takes a genius to understand its simplicity - Dennis Ritchie
  • There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies. - C.A.R. Hoare
  • Increasingly, people seem to misinterpret complexity as sophistication, which is baffling---the incomprehensible should cause suspicion rather than admiration. Possibly this trend results from a mistaken belief that using a somewhat mysterious device confers an aura of power on the user. - Niklaus Wirth
  • The belief that complex systems require armies of designers and programmers is wrong. A system that is not understood in its entirety, or at least to a significant degree of detail by a single individual, should probably not be built. - Niklaus Wirth
  • Inside every large program, there is a small program trying to get out. - C.A.R. Hoare

I can imagine the laughter that would be generated if I included as a bullet point on my resume, "My software is among the most complex ever created".

Thursday, December 06, 2007

Object Oriented Programming and Functional Programming as Inverses of Each Other

While listening to John Harrop on .NET Rocks!, I noted something he said that seemed quite illuminating to me.

I'm going to paraphrase here, but this is my distillation: OO and FP are basically inverses of one another:
  • OO takes a problem and breaks it down by actors. You end up portioning functionality into all the various methods of the classes. If a group of objects all need to implement the same function, you put that in an interface.
  • FP takes a problem and breaks it down by actions. Then you model your actors (the class hierarchy from OO) with a single data type (as in a tree). You pattern match over the data, and functions do different things based on the match - the type of actor.
An oversimplification to be sure, but it sure seemed to turn on a light bulb in my head. BTW, I suspect that F# has a very bright future. I can foresee even 80%ers like me making use of it given a problem that lends itself to FP, especially since it can play with other .NET assemblies written using traditional OO languages.

Iterative solution to Towers of Hanoi

When I was going through SICP, one of the early assignments was to come up with an iterative solution (rather than the traditional recursive one) to the classic Towers of Hanoi puzzle.

The first thing to understand (something I didn't before going through that chapter) is that "iterative" doesn't mean it makes no use of recursive functions. A recursive solution relies on state built up by previous invocations of some function; an iterative one carries all of its state explicitly. So an iterative solution can take the towers in any legal state and complete the puzzle (regardless of whether it makes recursive calls); a recursive one cannot. The challenge was to discover some local rule that would determine the next move.

It took me a while to discover one. After examining the only other example I could find, I realized that mine is probably naive (story of my life). Nonetheless, I figured I'd post it since it does work and may be of interest to someone else.

The rule is two-fold (the rings are numbered from smallest = 1 to largest = the total count of rings):
  1. Odd and even rings always travel in opposite directions, and which direction each travels depends on whether the total ring count is odd or even. If you have four total rings, then:
    • Odd rings: the next move is always to the left (wrapping around to the far side when no peg remains to the left).
    • Even rings: the next move is always to the right.
  2. When deciding which piece to move next, scan the top of each pile and move the largest ring that can legally make its prescribed move.
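In case it helps anyone, here's a sketch of that rule in Python (mine was written in Scheme; the peg numbering, the reading of "left" as one step around the circle of pegs, and the move log are my own framing):

```python
def solve_hanoi(n):
    """Iterative Towers of Hanoi using the two-fold local rule above."""
    pegs = [list(range(n, 0, -1)), [], []]  # ring n (largest) at the bottom of peg 0

    def direction(ring):
        # With an even total, odd rings step left (-1) and even rings
        # step right (+1); with an odd total the directions swap.
        odd_dir = -1 if n % 2 == 0 else 1
        return odd_dir if ring % 2 == 1 else -odd_dir

    moves = []
    while len(pegs[1]) < n and len(pegs[2]) < n:
        # Scan the tops of the piles; among the rings whose prescribed
        # move is legal, pick the largest.
        best = None
        for src, peg in enumerate(pegs):
            if peg:
                ring = peg[-1]
                dest = (src + direction(ring)) % 3
                legal = not pegs[dest] or pegs[dest][-1] > ring
                if legal and (best is None or ring > best[0]):
                    best = (ring, src, dest)
        ring, src, dest = best
        pegs[dest].append(pegs[src].pop())
        moves.append((ring, src, dest))
    return moves
```

For one to four rings this finishes in the minimal 2**n - 1 moves, and since each step only inspects the current tops of the piles, it should (per the point above) be able to pick up from any legal position.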

Friday, November 09, 2007

Shades of Laziness

I'm currently maintaining an EOL system. As soon as I found out that the system was EOL, my development approach took a 180-degree turn. No longer do I have to think of the long-term ramifications of my code. I no longer have any reason to refactor. Indeed, it would be irresponsible of me to do so: refactoring always carries the risk of introducing new bugs, and it would add to the workload of the business analysts (who are also the QA engineers), since they're already doing double duty working with this old system while transitioning to the new.
I even came across some support for my position after the fact (see "Delaying Development Expense").

So now I'm free to be a lazy developer: throw code in the UI layer, add small hacks where I would usually refactor to allow for proper addition of new functionality; simply do the quickest, easiest thing. It's amazing how much less time this requires than "proper" development. Of course, if the system were to stick around, ever more time would be required. This kind of lazy is different from the good kind of laziness in a developer; I've traded one lazy for another.

One thing that is actually liberating is the fact that I no longer groan at all the ancient bad architecture in the system that I formerly had to work around. Architecture is completely irrelevant; no given architecture is better than another at this stage - it's no longer a beast that I must fight.

Liberating though it may be, I think I'd rather have my old lazy back.

Wednesday, October 31, 2007

Fun with Scheme

As I've found the time, I've watched some of the videos (and read the book) of "Structure and Interpretation of Computer Programs."

I've been using Petite Chez Scheme along with TextPad for my little learning exercises (some of the other implementations of Scheme seemed rather difficult to get working on Windows).

It's amazing how little there is in common between programming in a Lisp language and mainstream object-oriented code monkeying. Sure, C# has anonymous methods, but the fact that they're available doesn't change much (although who knows how far the DLR will take mainstream .NET developers down the dynamic language path).

In one of the first lectures, Gerald Jay Sussman challenges the students to come up with an iterative rather than the commonly taught recursive solution to the famous Towers of Hanoi puzzle. It turned out to be quite a challenge for me: both discovering the algorithm and translating it into code the Scheme way.

The first thing that I had to struggle with was (of course) all the parens. Absolutely everything is inside pairs of parentheses. I don't really understand the exact reason for this (it somehow enables easy creation of programs using declarative data), but one thing is for sure: it's really easy to learn the syntax of the language (and there aren't a ton of crazy tricks as in C++, whereby one might use the language for years before discovering some). Making sure your paren pairs are matched up takes a bit of effort. It seemed like I needed to transform into a human compiler in order to write a function of much length, but after a while it's not so bad. Conforming to the indenting conventions makes it easier, but it sure seems to make it difficult to cut and paste code or insert a comment at the right spot.

The next thing that threw me was the fact that there are no reference parameters in Scheme; parameters are always passed by value. At one point I was searching all around trying to figure out how to pass parameters by reference, and couldn't find a way before I finally figured out that there is no way. I'm pretty sure it's purposeful: functions should produce no side effects.

The last main hurdle to overcome is the fact that you're not supposed to declare and set variables as a rule. Hell, you can't pass the suckers by reference, so what's the point anyway? I learned that one the hard way. I created my entire program (managing to test each function as I went along) only to discover that the last link was impossible. My practice of using variables in the mainstream way painted me into a corner. Instead of declaring variables, you create let blocks with the variables' values declared at the beginning of the block. The language help says that let blocks are just another form of lambda, which is pretty cool (everything is functional).
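That let-as-lambda equivalence is easy to see with a toy example; here it is sketched in Python for readability, with the corresponding Scheme forms in the comments:

```python
# Scheme:  (let ((x 5)) (+ (* x x) x))
# is just sugar for calling an anonymous function:
#          ((lambda (x) (+ (* x x) x)) 5)
# The same trick in Python: "binding" x is really passing an argument.
let_form = (lambda x: x * x + x)(5)
print(let_form)  # 30
```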

Here was the easy part: using lists. There are some funky things about lists that I don't quite understand, but one thing was easy: stuffing whatever I wanted in there. I wanted a string representing the name of each tower to be stored along with the integer values of the rings on that tower. With OO, you'd have to either define a type for such a list or do some casting of objects coming out of a generic collection, but neither is necessary in Scheme. The same holds true for any dynamic language, of course.
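For instance (sketched in Python, whose lists are similarly permissive; the tower layout is just my own from the Hanoi exercise):

```python
# A tower stored as one list: its name (a string) followed by its
# rings (integers, largest first) -- no type declarations and no
# casts, much like a Scheme list built with (list "left" 3 2 1).
tower = ["left", 3, 2, 1]
name, rings = tower[0], tower[1:]
print(name, rings)
```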

Overall, it was a pretty fun exercise. It'd be neat if I could use functional programming at work. With the DLR, that just might be possible (I could at least write test code or internal utilities or something).

Wednesday, September 26, 2007

Complex isn't cool

I was IMing a coworker today talking about the ASP page/application life cycle. I made a common comment of mine, that the Internet is nothing but a terrible hack of a way to deliver applications. Then I commented, "web programming is so darn complicated" to which he responded, "yeah, ain't it great?"
While I think I can understand his basic point, this is something that was written about recently at worse than failure. While that post received lots of negative comments, there was no disagreement that developers often complicate things unnecessarily (it appeals to our engineering drive), and that the better solution is often the more mundane one.
When I was in school I marveled at how complicated my code was. All that code, just to do this! Since it was my brainchild (and only I could wrap my brain around it), there was a certain intellectual satisfaction involved.
As developers, we should all be on the lookout for this. The system gains nothing from it, and the next guy (whether ourselves or some other monkey) stands to lose much (time, productivity, etc.).
It is sort of neat that a gazillion things come together to allow for killer apps on the Internet, but I would be very hip to a better solution (a standardized way to deliver smart clients or something). I'm not currently working on web apps, but when I do I always groan at the mountains of work that I'll have to put towards debugging the platform.
As I've already written (but can't be repeated often enough), simplicity (or reduction of complexity) is the most important goal in development.

Wednesday, July 18, 2007

Shades of Development

I just recently interviewed with a small company that needs someone to be the person who:
  • Maintains programs which patch together the pieces of their enterprise, e.g., doing some processing between the Internet order and the ERP system, writing reports which draw on the ERP system, etc.
  • Identifies (or refines when others identify) and creates process improvement solutions.
  • Helps to create / refine content for the web site.
  • Does other sundry bric-a-brac.
Now this seems to be a good opportunity:
  • The people seem great.
  • The company is small and growing fast.
  • I would be able to call the shots.
  • I would be getting in at the ground floor.
  • Productivity is inversely proportional to team size (with the probable exception that 2 are better than 1).
It does, however, illustrate the shades of development. One reason I'm code wannabe is that I often come across examples of the work of people who may be called "computer scientists". Or at least what they're doing is something orders of magnitude more compelling and/or useful than what the average business developer does.

This position is not one for the computer scientist; nor should it be. The business has a specific gap that needs filling - they need someone to apply the correct amount of lubrication at all the correct points to keep the enterprise at optimum operational capacity. I frankly think I would be very good at this.

One thing they made clear: things move fast, there would be a great amount of context switching, and they want someone amenable to as many interruptions as the business generates. They don't want someone who is a "head down coder with the office door shut". Once again, this is perfectly reasonable, but it's not without drawbacks from the standpoint of a purist (in this case purist being a favorable term for one of those really smart guy programmers described above):
  • "Closed door programming" is to be preferred. But again, this presumes a certain type of programming. I wouldn't be creating the kinds of solutions that require intense, focused concentration (at least I don't think so).
  • Context switching is harmful. Steve McConnell's quote also comes to mind, "... programming requires more concentration than other activities. It's the reason programmers get upset about 'quick interruptions' - such interruptions are tantamount to asking a juggler to keep three balls in the air and hold your groceries at the same time." I don't think there's any doubt it is a productivity killer. In this company's environment, however, there's probably a greater benefit that outweighs its drawbacks.
So this company's developer should be much different from many others. This introduces the range of development:
  • Corporate developers working on in house applications at BloatedCorp.
  • Rock stars creating startup Internet applications.
  • Lone developers working under unique conditions leading to killer solutions.
  • ISVs creating (formerly) shrinkwrap or (now usually) Internet apps.
  • Upper echelon programmers: those who create software for developers. Folks who work directly on .Net or Java internals or on IDEs.
  • Luminaries like Richard Stallman, Linus Torvalds, or Edsger Dijkstra.
  • Contract or independent developers: there's a whole sub-range in here.
  • Gaming programmers.
  • Embedded systems programmers.
  • Academics who may contribute to practical projects.
  • Monkeys who act as typists for business analysts adding new widgets to code in a system that should be redesigned to be configured directly by business analysts. (That one is a bit specific because it's what I do now).
If I take the job, I will probably be busier than a one-legged man (you know the rest), so I'll probably have to forbear studying the more C.S.-intensive subjects as I am sometimes wont to do. One thing all good developers must share is the constant acquisition of new knowledge; just some more than others...

Tuesday, July 03, 2007

What's the single best rule to follow in programming?

Actually, I think there are 2 that are equally important: one pertains to code construction, the other to design (and so the latter may be the single most important thing). I've recently seen both expressed in a new light.

The first (pertaining more to construction than the other) is brevity, or code less. The first link, especially, sheds light on just how important this rule is.

The second, one which I'm going to rate as the best rule simply because it always pertains to architecture and as such should have a greater effect on the system under development, is reduce complexity.

The linked interview with software luminary Roger Sessions is the best I've heard in quite awhile. In the interview, Sessions proposes using "equivalence relations" to determine the best possible partitions in developing enterprise architecture.

You can listen to the interview for a further explanation, but I think his best points were about complexity. He demonstrates that partitioning a system greatly reduces complexity by contrasting the number of potential states in a single program with 12 six-state variables against 2 programs of 6 variables each.
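The arithmetic behind that contrast is worth working out (my own back-of-the-envelope version of his setup, assuming 6 states per variable):

```python
STATES_PER_VARIABLE = 6

# One monolithic program with 12 six-state variables: every combination
# of variable values is a distinct program state, so states multiply.
monolith = STATES_PER_VARIABLE ** 12        # 2,176,782,336 states

# The same 12 variables partitioned into 2 independent programs of 6:
# each partition is understood on its own, so the state counts add.
partitioned = 2 * STATES_PER_VARIABLE ** 6  # 93,312 states

print(monolith // partitioned)  # 23328 -- the partitioned system is ~23,000x simpler
```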

Outwardly partitioned programs may not really be partitioned, however. If 2 "separate" programs (like services or whatever) share a database, they're not partitioned.

Sessions proclaims OO as the "worst partitioning technology". Now, I've read a fair amount lately in favor of FP over OO, and the arguments are pretty good. But I don't think they came close to Sessions' position against the traditional OO model - the creation of large hierarchies which merely tie everything together. He's not opposed to OO, mind you, just the typical implementation where reuse is a major goal.

"Reuse at the expense of complexity is unacceptable. Reducing complexity is much more important than reusing code; it is the most important thing."

The interview was so interesting to me that, for a moment, I felt excited about things like architecture. Then I remembered that I'm code wannabe and that I work at BloatedCorp (on the Behemoth project) and I snapped back to reality.

Friday, June 22, 2007

Who cares about code?

I'm the kind of code monkey who reads what I think are top blogs like coding horror, worse than failure, reddit (OK, it's not really a blog), and codebetter. I've read many of the programming classics like Refactoring, Pragmatic Programmer, and Mythical Man Month. As I find time, I'm going through SICP.
IOW, I strive for improvement in my "craft" (trite though that word may be).

But, other than at my first job, I haven't discovered anyone else who cares a lick about good software practice. Even if such "practices" are debatable, I used to expect that just about everybody endeavored to follow some. In interviews I've sat through, almost all the questions I've been asked have been inane. I remember one interview a long time ago where one person focused on my ability to call Windows API functions. Huh? I was naive enough to think that they should ask me questions about my design practices or some other relevant topic.
One shop recently interviewed me and didn't ask anything about my practice, have me do any exercises, or ask for a code sample; nothing. I was amazed.

I think the problem is that I glean info from the "upper echelons" of development while being employed at the bottom of the heap. I don't think I'm going to find many star coding shops posting jobs on amazon or careerbuilder. It's not that I think I'm a star, but I'd at least like to work somewhere such things are valued.

Another factor is that the code monkeys in these upper echelons aren't writing business logic for BloatedCorp. The shops where I've been employed during the last 4 years are building very narrow business specific systems, not crafting some innovative new system that businesses far and wide will use.

The work I'm doing now can hardly be called development. The architecture of the system is fixed, and there's no way it can (or even would be allowed to) be changed. I add new widgets (each doing pretty much the same thing); that's it. There's no ancillary technology (I just use my language's IDE); nothing at all other than scripting the rules for the new widget. It's not even a job a programmer should be doing - a business analyst should. If the system needs a change in the way it does something, I'm not the one to do it (for reasons outside my control).

IMO, this is a massive waste of my potential. But that's why I'm code wannabe. That's why I write these missives to you, diary...

Friday, May 11, 2007

Business Vs. Technical

This article at developer.* offers advice on the career path of a programmer. It suggests that the programmer needs to shift focus from technical to business-specific knowledge.
Similarly, Bertrand Meyer from the esteemed ETH wrote in an IEEE Computer article, "make sure you know the business as well as the technology; that will set you apart from mere techies."

Both seem to guide the programmer away from engineering skills and towards business skills. As a survival guide, my current experience seems to bear this out: the system I'm working on will be outsourced, but the business analysts will be retained. But I have a real problem with the former article especially. It's titled "Career Paths for Programmers," but IMO it should be titled "Why to Give Up Programming". This isn't career guidance for programmers who want to know how to make it as programmers. This is advice for people who just want to make it, not caring how.

Being a business analyst is nothing like being a programmer, generally speaking. I would never want to be a BA. Where I work, people with the BA title handle BA and QA, so it's kinda strange, but BA work in general couldn't possibly hold my attention; I would fail. There isn't anything related to problem solving or design in the BA's role. Further, I totally disagree with the quote in the article,

"he could train anyone in the technical skills he needed for a project, but finding those people with the necessary business skills to guide an IT project to success was something that could not easily be obtained".

I think the exact reverse is true. A person either has good problem-solving / design / logic skills or not. They can be cultivated, but not acquired from scratch. Perhaps this person imagines a programmer / software engineer / developer as someone who just knows how to install and configure disparate technological tools. That would be a description of pure technical knowledge that anyone could learn, but programming isn't something you can just teach anyone.
Straight-up business knowledge, on the other hand, could be stuffed into anyone's brain, it seems to me. You can't train just anyone to be an actuary, but you can teach anyone the details of your life insurance business and make them a BA.
I agree with the second article that the programmer who has the business knowledge is of more value, but this doesn't agree with the trend. I see what's happening to me as part of a trend: the software (technical) work is portioned out to those whose business is software, while the particular business details are managed by BAs. So the software people know how to make good software. They don't devote themselves to business knowledge; they get that knowledge from BAs.
This seems natural to me. Now it may be that the software work won't have anywhere near the future that the strict business work will. But there's no way I'm going to morph into a BA. If I have to do something I hate, it surely won't be filling my brain with details about the tonnage price of manila folders. I'd much rather be a sanitation engineer.

Thursday, May 10, 2007


Par for the course for code wannabe. But it is a good thing. After a brief search, I'd resigned myself to my current position for personal reasons; now I must get a better job.

The decision of my employer to outsource the system I maintain seems a no-brainer to me. I could speak all day about its flaws (despite the fact that it does, in fact, work). The most logical reasons are these:

1) The system needed a re-write (as much as many of the current developers would disagree). It makes more sense to do it over with fresh blood.
2) The third party company already handles analogous systems for competitors. It's inevitable that we would follow suit so our customers (who are actually middle men) can come to a central location when comparing.
3) Most importantly, my employer's core business isn't software, and the industry has somewhat of a reputation for crappy (mostly internal) software. It makes sense to keep the focus on those employees who deal directly with the core business and outsource software to a company that has software as its core business.

Point #3 raises the question that I've heard (read) asked many times: how much focus should a code monkey devote to business knowledge (of a particular vertical market), and how much to software development itself? Subject for another post.

Thursday, April 05, 2007

Strangely Duplicated

I think avoidance of duplication is one of the most important maxims to follow in software development. Martin Fowler wrote in Refactoring, "By eliminating duplicates, you ensure that the code says everything once and only once, which is the essence of good design". I could've sworn he also wrote, "all information in a system should be represented in one place", but I can't find that quote anywhere. Anyway, I would argue that it is more than just good design; it's essential if one wants to avoid problems.

Duplication was a problem I pointed out in some code recently. A design decision was made that is quite simply mind-boggling.
Here's the scenario: the application runs on desktop and web, which share business objects. A huge dummy object (a discussion in itself) contains all input UI values. For desktop, the UI is refreshed based on the values in the UI object after validation on that object occurs. For web, a message pops up (someone else's decision) rather than the UI being refreshed.
So the decision was made that for the desktop, we merely reload the values (after validation may have changed them in the dummy object) into the UI. Simple enough. For the web, we add an additional giant object that contains the before values and a copy of the after values. And for every single variable in the dummy UI object that is validated (>500), there is a separate method that:

  • Caches the original value

  • Sets the new value (if necessary)

  • Stores the new and old value along with a string indicating the name of the variable.

When asked why this was done, the initial reply was merely that the web "shows a user message rather than simply changing the variable on screen". When asked why it couldn't simply compare against the values in the UI object, the reply was essentially the same ("it needs to show a message"). In addition, the coder mentioned that the new data structure (with the duplicated data) could be a place to add additional functionality (can you say "YAGNI"?).

I pointed out (rather casually) these problems:

  • Duplication of data.

  • Forces the developer to remember to include a method copying the algorithm above for every new validation rule (which I already forgot to do once, prompting the discussion).

  • The above algorithm is duplicated ad infinitum.

The response was that this is merely my opinion, but I personally think it transcends opinion. If data is ever to be duplicated, there should be a very good reason for it, because duplicated data opens the door to corrupt data - you may as well count on it.
In this case, not only is there no reason, one wonders how such a solution could even be contrived.

Anyway, this highlights 2 important red flags in the software development process:

  • Data duplication.

  • Multiple methods with near duplicate code.

IMO, if you see the former, your design needs correcting. If you see the latter, figure out how to combine the code into a single method. If that's not possible, your design needs correcting.
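To make the alternative concrete, here's a hypothetical sketch (in Python, with invented names; the real system was in neither) of how one generic method could replace the 500+ hand-written copies of that cache / set / record algorithm:

```python
def apply_validated_value(ui, field_name, new_value, change_log):
    """One generic helper instead of one method per validated variable:
    cache the original value, set the new one if it differs, and record
    old/new alongside the variable's name."""
    old_value = getattr(ui, field_name)          # cache the original value
    if new_value != old_value:
        setattr(ui, field_name, new_value)       # set the new value
        # store the old and new values along with the variable's name
        change_log.append((field_name, old_value, new_value))

# Hypothetical usage: every field on the dummy UI object goes through
# the same code path, so a new validation rule needs no new method.
class DummyUI:
    quantity = 1

ui, log = DummyUI(), []
apply_validated_value(ui, "quantity", 5, log)
print(log)  # [('quantity', 1, 5)]
```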

Wednesday, February 07, 2007

Teamwork and Development

Code wannabe strikes again (I just changed the name of this formerly work-group-oriented blog, so the tone of posts is different though still software oriented).

My boss tells me what I already know: I'm not playing well with my coworker. I've failed. Well, maybe not; lemme esplain:

  • My coworker (my senior) is in another state.

  • Our personalities don't mesh.

  • She has less experience (in time and most definitely in variety) than I do.

  • Our development philosophies are polar opposites: hers is git-r-done / hack it together; mine is do it according to widely accepted good development practices (although we both think our philosophy serves our customer best).

  • She wants to continue to follow the ancient existing architecture, I want to change it wherever possible.

  • And so it goes.

Because of the above, we don't communicate much. It's not terrible, but it's far from optimal. My boss suggested that he may move another developer (who is much more happy-go-lucky) to my spot and me to his system (a one-man show).

So on the one hand, I've failed to be a team player - that's evident. But should I have been one? That's not evident. If I'm relegated to a one-man system should I simply embrace that as an opportunity to call all my own shots and flourish as perhaps I can best, or should I consider that a step down and always strive for collaborative development?

I believe one thing about lone development: it can be very fruitful. The Mythical Man-Month and copious other writings detail the difficulties with team development, and I believe that applies right down to teams of as few as 3. 2 may be a unique situation, what with XP and all (or even without XP, as long as the 2 developers are on the same page and communicate well). Yet, even with 2, the rule may apply. Could this cat have done what he did with a partner? A partner would surely have only slowed him down. Maybe that's only because he's a genius, but I suspect it applies to anyone who frequently gets in the zone during development.