Friday, September 26, 2008

Programming Proverbs

I was listening to an On Point show last night about proverbial wisdom, and it struck me how full of proverbs software engineering is. Someone even wrote a book of them, Programming Proverbs.

Here are a few I can think of:

  • KISS (Keep It Simple Stupid)
  • DRY (Don't Repeat Yourself)
  • When in doubt, leave it out.
  • Choose two: Good, Fast, Cheap
  • There's no silver bullet
  • “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away” - Antoine de Saint-Exupéry
  • No Broken Windows

People don't often take proverbs seriously. But I find them extremely useful when writing software, and I don't think I'm the only one who finds them helpful. How often do you think about the DRY principle, or KISS, when writing software? These proverbs have invaded the language of software engineering. I think their value suggests something about either the nature of our industry or its current state. I wonder whether other industries are as riddled with proverbs.

I found a good list at http://www.multicians.org/thvv/proverbs.html.

Which big ones did I miss?

Updates

Here are a few more from Eric:
  • YAGNI (you ain't gonna need it)
  • PICNIC (problem in chair, not in computer)
  • BAD (behaves as designed)

Saturday, September 20, 2008

Measuring Things

Everyone loves to measure things. Eric told me the other day about a story he heard at NFJS about how Henry Ford publicly measured employees' performance producing I-beams. The mere act of measuring publicly increased the number of I-beams produced (I looked around on the internet but couldn't find any references).

Recently at work we started playing the Hudson Continuous Integration Game. In this "game" there is a public record of points: your check-ins can net you points or lose you points. The rules we play with are listed below (a rough sketch of how they might tally a single check-in follows the list):

  • -10 points for breaking a build
  • 0 points for breaking a build that already was broken
  • +1 point for doing a build with no failures (an unstable build gives no points)
  • -1 point for each new test failure
  • +1 point for each new test that passes
  • +3 points for removing a warning, TODO, or fixing findbugs errors
  • -3 points for checking in a warning, TODO, or creating findbugs errors
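
Here's a rough sketch, in Java, of how those rules might tally the points for a single check-in. To be clear, this is not the actual Hudson CI game plugin code; the class and field names are made up purely for illustration.

// A hypothetical sketch of the scoring rules above -- not the real plugin code.
public class CiGameScorer {

    // Facts about one check-in, as the rules above describe them.
    public static class CheckinResult {
        boolean brokeBuild;          // build failed and had been passing
        boolean cleanBuild;          // build succeeded with no test failures
        int newTestFailures;
        int newPassingTests;
        int warningsTodosOrFindbugsFixed;
        int warningsTodosOrFindbugsAdded;
    }

    public int score(CheckinResult r) {
        int points = 0;
        if (r.brokeBuild) points -= 10;                  // -10 for breaking the build
        // breaking an already-broken build is worth 0 points, so nothing to do
        if (r.cleanBuild) points += 1;                   // +1 for a build with no failures
        points -= r.newTestFailures;                     // -1 per new test failure
        points += r.newPassingTests;                     // +1 per new passing test
        points += 3 * r.warningsTodosOrFindbugsFixed;    // +3 per warning/TODO/findbugs fix
        points -= 3 * r.warningsTodosOrFindbugsAdded;    // -3 per new warning/TODO/findbugs error
        return points;
    }
}

So, under these rules, a clean build that fixes two findbugs warnings and adds three passing tests nets +10, while checking in a single TODO on an otherwise clean build nets -2.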

Each month we reset the scores; this is the third month we're doing it. The top three get prizes (a toy from the dollar store to display proudly on their desk), and the loser also gets a toy: the cockroach of shame. We're halfway through this month; the top three all have 200+ points (at the moment I'm #2), and then the point count drops off considerably. I believe #4 has 100 points, and the person in last place has -1 (note: 25% of the people playing are full-time developers; the other 75% are scientists who do a little development, but everyone plays).

This is far from a perfect measurement of performance, but I have to tell you, the fear of public ridicule for having low points (or just my competitive nature) has certainly made me go right back and fix any findbugs errors, and implement TODOs rather than just leave them there. It's kind of neat on a personal level, but it has had an effect on our team as well. It's encouraged other people to clean up their warnings and fix easy problems, and it's started a lot of discussions about good coding practice (I think this has been the most valuable thing it's done). The most controversial rule is losing 3 points for checking in a TODO.

A number of people feel that checking in a TODO shouldn't lose you any points, and that the rule will just encourage people not to mark things as TODO when they should. I can totally see this point. On the other hand, no matter how much I hate to lose points, if I have 2-3 things in a month that really are TODOs and I don't have time to implement the feature right then, I'm okay losing 6-9 points... I created work by checking in, so I should get dinged. In my eyes this encourages people not to check in if they're going to create work for other people.

The whole process has been a lot of fun, and it's started a number of conversations about development with people who weren't talking about it much before. I'm currently measuring the success of the game by how much people are talking about it. This month that measurement is at 104; I hope next month it's 150.

Thursday, September 18, 2008

Google Maps Selenium Test Suite

Google Maps open-sourced their Selenium test suite. I peeked at a few of their examples. They're very, very small tests, which is really cool. Anyway, check out their wiki: http://code.google.com/p/gmaps-api-issues/wiki/SeleniumTests.
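
I won't pretend to know how Google structures theirs, but for anyone who hasn't seen Selenium RC from Java, a hypothetical "very small" test looks roughly like this (the URL and element locators are made up; only the DefaultSelenium calls are the real Selenium RC API):

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

// A minimal Selenium RC sketch -- not one of Google's actual tests.
// The page URL and locators below are hypothetical.
public class TinySeleniumTest {
    public static void main(String[] args) {
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://example.com/");
        selenium.start();
        try {
            selenium.open("/search");             // load the page
            selenium.type("q", "pizza");          // type into a field
            selenium.click("searchButton");       // click a button
            selenium.waitForPageToLoad("30000");  // wait up to 30 seconds
            if (!selenium.isTextPresent("results")) {
                throw new AssertionError("expected some results text on the page");
            }
        } finally {
            selenium.stop();
        }
    }
}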

It must be nice testing a UI that doesn't require massive amounts of state to get to the feature under test. Maybe testing in general will push app development towards a world where parts and features of apps require smaller and smaller amounts of state setup to work. I know testing has pushed my code in that direction. But what would that even mean for an app?

Monday, September 8, 2008

Antlr Testing

Just started using a JUnit-style Antlr testing package... called Antlr Testing. It's great: it lets you test your lexer, parser, and tree walker. The documentation is light but good enough, and it's pretty easy to use (at least on a small grammar).

The website does a better job describing it than I could: http://antlr-testing.sourceforge.net/
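
I won't reproduce the Antlr Testing API here (the site above does a better job anyway). But for comparison, even without the package, a bare-bones JUnit test against the plain ANTLR 3 runtime looks something like this. This is a sketch: lookAhead2Lexer and lookAhead2Parser are the classes ANTLR generates from the lookAhead2 grammar in the post below; everything else is just illustration.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.RecognitionException;
import org.junit.Test;

// Plain JUnit sketch against the ANTLR 3 runtime (not the Antlr Testing API).
public class LookAhead2GrammarTest {

    @Test
    public void acceptsASimpleRule() throws RecognitionException {
        assertEquals(0, syntaxErrors("A:AA"));
    }

    @Test
    public void reportsErrorsOnBadInput() throws RecognitionException {
        assertTrue(syntaxErrors(":A") > 0);
    }

    // Run the generated parser's top-level rule and count syntax errors.
    private int syntaxErrors(String input) throws RecognitionException {
        lookAhead2Lexer lexer = new lookAhead2Lexer(new ANTLRStringStream(input));
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        lookAhead2Parser parser = new lookAhead2Parser(tokens);
        parser.prog();
        return parser.getNumberOfSyntaxErrors();
    }
}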

Saturday, September 6, 2008

Antlr LL

So in fact the grammar I wrote yesterday is an LL(2) grammar. Antlr generates LL(*) parsers, but you're allowed to specify how many tokens of lookahead you want (I'm still unclear whether Antlr is smart enough to notice that a grammar is LL(k) and generate source code that takes advantage of that on its own).

LL(k) means that, for a given language, the parser has to look ahead at most k tokens to decide which rule it's matching. There are parsing optimizations Antlr can make if you specify k. For example, in lookAhead2 below, after matching an 'A' inside 'A'+ the parser has to peek at one more token: if a ':' follows, a new rule is starting and the loop should end, so it needs k = 2. By the same argument lookAhead3 has to see 'A' 'A' ':', so it needs k = 3.

See: http://en.wikipedia.org/wiki/LL_parser for more info on LL parsers

In Antlr, the options block { k = INT; } specifies LL(INT). Three examples (a few sample inputs each grammar accepts are listed after the third one):

LL(1)

grammar lookAhead1;

options {
 k = 1;
}
prog : rule+;
rule : 'A' ':' 'B'+
 ;

WS : (' '|'\t'|'\r'|'\n')+ {skip();}
 ;

LL(2)

grammar lookAhead2;

options {
 k = 2;
}
prog : rule+;
rule : 'A' ':' 'A'+
 ;

WS : (' '|'\t'|'\r'|'\n')+ {skip();}
 ;

LL(3)

grammar lookAhead3;

options {
 k = 3;
}
prog : rule+;
rule : 'A' 'A' ':' 'A'+
 ;

WS : (' '|'\t'|'\r'|'\n')+ {skip();}
 ;
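
For reference, here are a few inputs each of those grammars accepts (my own quick sanity check, not something from the Antlr docs); whitespace is skipped by the WS rule in all three:

lookAhead1 : A:B     A:BBB
lookAhead2 : A:A     A:AAA
lookAhead3 : AA:A    AA:AAA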

Friday, September 5, 2008

Antlr Rules Matching (or look ma no semicolons)

I'm doing a little more with Antlr these days, and I had made some assumptions that turn out not to be true. I had assumed that you would need clear rule terminators (like newlines or semicolons or something) and that rules couldn't overlap, but they can, and I believe this is thanks to the lookahead functionality of the Antlr parser. Check it out:

grammar lookAheadTest;

prog : rule+;
rule : 'A' ':' 'A'+;

WS : (' '|'\t'|'\r'|'\n')+ {skip();}
 ;

What's really cool about this parser is that you can feed it "A:AAAAA:AA" and it figures out how to split that up into two rule matches: "A:AAAA" followed by "A:AA".

I thought I was going to get warnings about multiple execution paths, but I didn't! This makes it much easier to think about and write grammars. I have to assume this is bad practice when building a medium-to-large language, but I'm only dealing with little languages, so I'm thinking it's okay for now.
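
A quick way to convince yourself is a throwaway driver like this one. It's only a sketch: lookAheadTestLexer and lookAheadTestParser are the classes ANTLR 3 generates from the grammar above, and the main() is mine, not anything Antlr gives you.

import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.RecognitionException;

// Feed the concatenated input to the generated parser and check that it
// parses with no syntax errors (a sketch, not part of the grammar itself).
public class LookAheadDriver {
    public static void main(String[] args) throws RecognitionException {
        lookAheadTestLexer lexer = new lookAheadTestLexer(new ANTLRStringStream("A:AAAAA:AA"));
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        lookAheadTestParser parser = new lookAheadTestParser(tokens);
        parser.prog(); // ends up matching rule twice: "A:AAAA" then "A:AA"
        System.out.println("syntax errors: " + parser.getNumberOfSyntaxErrors()); // expect 0
    }
}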

Thursday, September 4, 2008

New Code, DSL first design

I just started writing some new code for a project, and for the first time I designed the DSL first. I liked it.

In Martin Fowler's article http://martinfowler.com/articles/mocksArentStubs.html he talks about middle-out design, where you start with a feature in the domain layer and work outward, versus top-down design, where you start with the UI and work down.

For me a DSL is another way a user can interact with or configure a system, so it seems related to, but different from, a traditional UI. I think it's a closer abstraction to the domain than the UI is, but you can easily imagine that anything you do in a DSL you could do in the UI. So in a sense designing the DSL first is a top-down approach, but in my very limited experience it's so close to the domain model that it really feels like a middle-out approach.

Anyway, it was a really interesting experience, and it made it easy to think about what features I needed in my domain layer. And because I was playing with so many syntaxes, I finally settled on something that loosened up the design quite a bit more than it would have been if I had started in the domain layer first.

 