Tuesday, December 13, 2005

Super Humanes, Mini minimals and idiot users.

I agree with Linus: KDE is more usable

If you think your users are idiots, only idiots will use it. I don't use Gnome, because in striving to be simple, it has long since reached the point where it simply doesn't do what I need it to do.
Linus Torvalds, on the Gnome usability mailing list

And, sorry about that, I partially agree with Linus. Gnome is either too simple (mini minimal) or too complex (super humane). It's not a reasonable user interface. I've been using KDE for several years (with Mandrake and with Kubuntu) and I have to admit that every new release is an improvement in user experience. I fully recommend KDE too. (And there's no intention to start a Gnome vs. KDE war. If you like Gnome then go ahead and have fun, but please respect my opinion and let me express it. And if you're a Gnome developer, you may find good hints here for improving your GUIs.)

The fact is that Linus' comment may as well be applied to the distinction between Minimal and Humane interfaces that has caused all these comments on the web lately. Alongside Minimal interfaces and Humane interfaces I would include two more extreme variations of API design: the "Mini Minimal" approach and the "Super Humane" one.

Mini Minimal Interfaces, by example

Let's look at an example of "Mini Minimal" interface design. "Mini Minimal" interfaces are an extreme of "Minimal Interfaces"; their defect is being so simple that they become useless. To see a real-life example of "Mini Minimal" interface design we will need "The Gimp" v 2.2. (Beware that you'll have to install tons of weird libraries. On my Ubuntu 5.04 box this software requires linux-sound-base !??).

Once you get "The Gimp" v 2.2, create a new image and try to save it as a PNG image. This is the "Save" dialog I'm getting:

Save dialog, simple

This is an example of dialogphilia, a disease that makes you re-create and re-design file dialogs on every single release of your product. The fact is that the dialog is just too simple for me to use, because it doesn't allow me to properly select a folder to save the image into. So, to select a folder to save the image in, I am forced to click on that little, tiny small white arrow to the left of "Buscar otras carpetas" (browse for other folders).

From my point of view this is a nice example of a "Mini Minimal" interface: an interface that, in trying to be simple, becomes simplistic and fails to meet user expectations; as a consequence, it lacks functionality and is just not usable.

The fact is that the dialog violates two usability laws: don't hide/show things from the user, and don't create weird new widgets (such as that little tiny arrow there). But wait, there's even more. Let's keep on clicking and investigating. Once you click on that weird tiny white arrow here's what you get:

Save dialog, simple

And this is just another example of dialogphilia. It's the weirdest dialog I've ever seen for selecting a folder. I understand that it's a good idea to innovate new ways of doing things. Gnome may be a good place for somebody to experiment with innovative dialogs. I appreciate this. I probably even like it from a technical point of view. But the fact is that I just want to save a PNG image. I'm not a guinea pig to experiment dialogphilia on. I'm a frustrated user. That's all.

So I manage to select the folder. Wow. That was hard. Let's click on the "Save" button. Wait. There's even more!!! We're reaching the...

... Super Humane Interface

Once you manage to click that "Save" button you get this last example of dialogphilia:

Save dialog, simple

Well, the fact is that the dialog is too wide to fit the screen (!!??), so here's the part of it that's falling off the right edge of my monitor (note that the "OK" button is not visible without scrolling the dialog !?):

Save dialog, simple

And this is the perfect example of a Super Humane Interface: an interface with too many buttons, too many controls, too many widgets that are just... useless!! Trying to be too humane makes the interface just plain useless.

The fact is that this "Save as PNG" options dialog is just out of place. (By the way, the whole process has to be repeated, including the painful folder selection, for every single PNG image you want to save.) I'd prefer a little tiny tiny small "yellow" arrow in the previous dialog to be presented with the options for the PNG format. Or even move all the PNG stuff into a "Preferences" dialog (Gnome developers tend to love preferences; see this message at the Gnome usability mailing list for screenshot examples). But I don't want to set them now! I don't care whether I save the PNG file with an "Entrelazado (Adam 7)" (interlaced) format or not. I don't even know what that means. This dialog is getting in my way.

So providing too many methods, buttons, dialogs and options while trying to make things simpler just makes things more complex.

As I said in my previous entry, designing user interfaces is a hard task. Humane and Minimal interfaces may be the way to go, but both have their extremes. Keeping things under control, using our common sense to avoid extremes such as Super Humane or Mini Minimal, is probably the way to go.

After all, it's not about Humane Interfaces or Minimal Interfaces. As Aristotle said, "Virtue lies in the middle".

Happy API designing,
Antonio

Thursday, December 8, 2005

Simple KISSes and surprises

Wow. Elliotte's criticism of the so-called "Humane Interface" API design "school of thought" has spawned an Internet-wide discussion (meme?) about simplicity in API design.

So I decided to review yesterday's entry, and take a look around the Internet, to try to see what makes an API a good API. At least for me. And this is what I've found (and, as always, all feedback is welcome).

Surprise, surprise...

Yesterday I talked about the importance of following idioms and language conventions. Today a principle backs up this idea. The so-called principle of minimum surprise (or "principle of least astonishment") states it clearly:

In user interface design, programming language design, and ergonomics, the principle (or rule) of least astonishment (or surprise) states that, when two elements of an interface conflict or are ambiguous, the behaviour should be that which will least surprise the human user or programmer at the time the conflict arises, because the least surprising behavior will usually be the correct one.
Principle of minimum surprise

And the fact is that Ruby's array API is anything but "least astonishing" to a Java programmer. I couldn't stop laughing while reading Elliotte's review of Ruby's array API!! (And, all respect due to Ruby programmers, but please understand that Elliotte has a point here, and he's so funny I couldn't resist.)

Another example is C++ operator overloading. If you wrongly overload the assignment operator then your implementation fails to follow the principle of minimum surprise, and you can end up with a class that leaks memory. That's one of the reasons why I wouldn't like to see operator overloading in Java! ;-)

So, to summarize, point one is that an API should follow the minimum surprise principle.

Minimal interfaces: what is reasonable?

Martin Fowler introduces the Minimal Interface style of API design. He defines "minimal" as the smallest reasonable set of methods that will do the job.

And I think that's a nice definition. "smallest" and "reasonable" balance each other quite well and give the definition the part of ambiguity it needs.

Take, for instance, lists. The list abstract data type requires just one constructor and four operations. Just that, and you can do whatever operation you want with lists (including "first()" and "last()"). So the "smallest set of methods that will do the job" for a list is four (plus a constructor).

Well, of course a list interface with just 4+1 methods would seem spartan to everybody. That's where "reasonable" fits nicely into the definition (making the definition subjective and, thus, unusable, by the way ;-) ).
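Just to make the 4+1 idea concrete, here's a tiny sketch in Java. All the names (Cons, prepend and so on) are made up for illustration, not from any real API; note how even "first()" and "last()" fall out of the four primitives:

```java
public class MinimalListDemo {

    // The one constructor: a cons cell. null represents the empty list.
    static final class Cons<E> {
        final E head;       // first element
        final Cons<E> tail; // rest of the list
        Cons(E head, Cons<E> tail) { this.head = head; this.tail = tail; }
    }

    // The four operations.
    static <E> boolean isEmpty(Cons<E> l)      { return l == null; }
    static <E> E head(Cons<E> l)               { return l.head; }
    static <E> Cons<E> tail(Cons<E> l)         { return l.tail; }
    static <E> Cons<E> prepend(E e, Cons<E> l) { return new Cons<E>(e, l); }

    // "first()" and "last()" derived from the primitives above.
    static <E> E first(Cons<E> l) { return head(l); }
    static <E> E last(Cons<E> l) {
        while (!isEmpty(tail(l))) l = tail(l);
        return head(l);
    }

    public static void main(String[] args) {
        Cons<String> l = prepend("a", prepend("b", prepend("c", null)));
        System.out.println(first(l)); // prints "a"
        System.out.println(last(l));  // prints "c"
    }
}
```

So everything else a list API offers is convenience built on top of these four.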

So, what is reasonable? Can we improve Martin's definition a little bit? What are the forces, the non functional requirements a Minimal Interface must meet so as to be a "Minimal Interface" with a smallest set of "reasonable" operations?

Reasonable means "as usual"

That is, a reasonable interface (to me) is one that follows the minimum surprise principle. For a Java list, a reasonable interface (to me) is one that uses "size()" to get the number of elements, that uses indexes starting from 0, and that uses Iterators to iterate over the elements of the list. That's the principle of minimum surprise applied to (Java) lists. At least for me.

And a reasonable interface is one that keeps backwards compatibility. What would happen if Sun decided to remove the "Enumeration" interface from the Java API? Wouldn't that break lots of existing programs out there?

Reasonable means easy to implement and extend

I wouldn't like to extend a list interface with 78 methods. That's a nightmare. As Martin states, a minimal interface "reduces the burden on implementers". (Well, at least in languages such as Java, where non-abstract classes implementing an interface have to implement *all* methods of the interface.)

Reasonable means easy to learn

And again Martin notes the importance of the learning curve in API design. Note, as well, that the more an API follows the minimum surprise principle, the easier the API is to learn. I don't need to go look at the java.util.Map API to know that I can retrieve the size of the map by invoking "size()" on it. (And yes, I admit the Java API has evolved a lot and we still suffer from the length()/getLength()/size() burden in Java APIs.)

Reasonable means easy to test

That's another important point, of course. APIs are to be maintained, and the smaller the public API, the fewer things to test, right?

Reasonable means encapsulated, modular, providing information hiding

Which are basic principles to follow, aren't they? As Elliotte points out about the Ruby array...

... Is it a List? Is it a stack? Is it a map? It's three, three, three data structures in one!

And, please, note that I don't mean to criticize Ruby people, but I think the example serves quite well as a counter-example of encapsulation and modularity. Wouldn't it be better to have separate Map and Stack classes? Is that reasonable? I think so. So, for me, a reasonable minimal interface is one that follows the principles of encapsulation, modularity and information hiding. The ones that make an array an array, and not a map, a ternary search trie or the mother of all data structures ;-)
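For contrast, here's a quick sketch of one-type-per-job in Java, using only the standard collections: a stack when you need a stack, a map when you need a map, instead of one do-it-all structure.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class SeparationDemo {
    public static void main(String[] args) {
        // A stack when you need a stack...
        Deque<String> stack = new ArrayDeque<String>();
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop()); // prints "b"

        // ...and a map when you need a map.
        Map<String, Integer> ages = new HashMap<String, Integer>();
        ages.put("Ana", 30);
        System.out.println(ages.get("Ana")); // prints "30"
    }
}
```

Each type stays small, and each one's contract is obvious from its name.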

Reasonable means ... minimal!!

APIs don't usually change. Changing a public API is asking for trouble for all the people using it (remember those deprecated classes we have in APIs?). It's probably easier to augment a public API than to reduce it. Adding some new methods to a public API is much easier than removing methods people are already using. And that's why a reasonable API is a small one. The smaller the API, the less likely you'll have to shrink it in the future, right?

To summarize

So, to summarize, I like minimal interfaces too. "Reasonable" minimal interfaces. They're easier to use, easier to learn and easier to maintain. Minimal interfaces can be augmented without too much hassle. But minimal interfaces are probably harder to design. You cannot measure the qualities of an API until you release it. And then it may be too late to change it for the better. All we can probably do is use some good old common sense.

Sorry for this long post. And happy API designing,
Antonio

Wednesday, December 7, 2005

Harold, Martin and kisses

Elliotte vs. Martin: Elliotte wins

Yesterday Elliotte Rusty Harold expressed disagreement with a recent Martin Fowler article. Elliotte basically says (at least this is what I understand) that APIs should be simple to use, but should not be filled with useless methods such as "List.last()" and "List.first()" and things like that. Quoting:

More buttons/methods does not make an object more powerful or more humane, quite the opposite in fact. Simplicity is a virtue. Smaller is better.

(read more)

Elliotte says that a class with 78 methods is difficult to use. Why would you want a method such as "List.first()" when you can do "List.get(0)"?

I've been reading Martin's article too. He basically says that he "leans to" the "Humane Interface", that is, having lots of methods in a class, but he admits that this is harder.

So, what's the best way to follow the KISS principle? What's better? Having an API with a "reasonable" minimum of methods, or having an API with lots of methods to make things easy to use?

This is a tough question, and I don't think anybody can have the final answer. As human beings, all of us have preferences, so our answers will always be subjective.

Anyway, I think Albert Einstein was right when he said "Make it as simple as possible, but not simpler". That quotation applies quite well to the discussion. Following Albert Einstein's advice leads us to stick with Elliotte's opinion. We don't need an API that makes things simpler than simple. Simple is simple enough. Simpler is more complex.

Simple or simpler? Kisses are idioms, right?

So what makes an API simple? And what makes it "simpler"? Let's look at an example. Take, for instance, the DOM API for XML parsing.

Elliotte was not very satisfied with that API. And I share his opinion: the DOM API is not very easy to use. So he decided to build a new API himself:

XOM is designed to be easy to learn and easy to use. It works very straight-forwardly, and has a very shallow learning curve. Assuming you're already familiar with XML, you should be able to get up and running with XOM very quickly.

learn about XOM

So, why is that? Why is DOM not "easy to use"?

Let's have Elliotte answer that for us. On slide number 9 we can read:

[... the DOM API ...] Just plain ugly; does not adhere to Java programming conventions.
What's wrong with the XML APIs (and how to fix them)

So Elliotte answered our question two or three years ago. A simple API is one that adheres to Java programming conventions (whether those conventions are right or not doesn't matter; after all, they're just conventions).

A simple API makes indexes start from 0 and run until X.size()-1. That's simple.

So having a List API that contains a "first()" is not simple. It's just "simpler". Having a "last()" method is "simpler" (and thus more complex) than using the simple "get( size()-1 )".

But, for a Ruby programmer, having "first()" and "last()" is really simple, and the other way round is more difficult. Each programming language has its idioms, and an API that follows those idioms is a simple API. An API that tries to be "simpler" and goes against the idioms is an API that makes things more difficult. As Elliotte says, the more buttons, the bigger the difficulty.
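The Java convention in question can be sketched in a few lines, using only the standard java.util.List API (the names here are just example data):

```java
import java.util.Arrays;
import java.util.List;

public class ConventionDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ana", "Bob", "Eve");

        // The conventional Java idiom: indexes start at 0
        // and run until size() - 1. No first()/last() needed.
        String first = names.get(0);                // "Ana"
        String last  = names.get(names.size() - 1); // "Eve"

        System.out.println(first + " / " + last);   // prints "Ana / Eve"
    }
}
```

A Java programmer reads get(0) and get(size()-1) without ever opening the Javadoc; that's the minimum surprise principle at work.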

So that's just my opinion: I like simple things, but not "simpler" things (because those are more complex to maintain).

Make things as simple as possible, but not simpler.

Now, I should apply that myself and stop thinking of using annotations for everything. But, wait, are annotations a new Java idiom? (Mmm, that's more food for thought for the next holidays ;-)).

Any (simple) opinions out there?

Cheers, ;-)
Antonio

Friday, October 28, 2005

To annotate or not: that is the question

It seems there is a diversity of opinions regarding Java 5 annotations, so I decided to write a little bit about my (current) thoughts on this. Maybe you want to express your ideas too. I would appreciate that; it would help me make up my mind on this.

Annotations, by accident

The fact is that I happened to read my favourite Scheme book lately and I found this by accident:

"The contrast between function and procedure is a reflection of the general distinction between describing properties of things and describing how to do things, or, as it is sometimes referred to, the distinction between declarative knowledge and imperative knowledge."
Structure and Interpretation of Computer Programs 2nd. Edition, page 22. (Nice read)

And I thought this could say it all about annotations. I mean: since annotations contain declarative knowledge of things they should be used to describe the properties of things (but not how things work).

Let's rewrite this. What does "things" mean? What can I annotate in Java? Well, it seems we can annotate annotations themselves, packages, classes (and interfaces), constructors, methods, method parameters, class fields and local variables.

That's a lot! ;-)

So, rewriting the paragraph above, we end up with:

Annotations usage idea I: describing properties

Annotations should be used to describe the properties of annotations, packages, classes (and interfaces), constructors, methods, method parameters, class fields and local variables.

So that's what I understood about annotations, too. And this is a Good Thing to have in Java. JAXB2 annotations and EJB 3.0 annotations conform to this rule: they contain a description of the properties of things (properties of entity beans, for instance).
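To make idea I concrete, here's a hedged sketch: a home-made @Column annotation (a stand-in inspired by, but not, the real EJB 3.0 one) that declares a property of a field — the "what is" — without saying anything about how it is used:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class PropertyAnnotationDemo {

    // Hypothetical annotation, purely declarative: it says *what* the
    // field is (a column with a given name), not *how* to persist it.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Column {
        String name();
    }

    static class Customer {
        @Column(name = "CUSTOMER_ID")
        long id;
    }

    public static void main(String[] args) throws Exception {
        // Anybody (a persistence engine, say) can read the property back.
        Field f = Customer.class.getDeclaredField("id");
        Column c = f.getAnnotation(Column.class);
        System.out.println(c.name()); // prints "CUSTOMER_ID"
    }
}
```

The annotation carries declarative knowledge only; whoever reads it decides what to do with it.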

Annotations and that little footnote down there

So I kept on reading my favourite book and happened to read that little footnote down there. I mean this one. Let me quote a little bit:

In a related vein, an important current area in programming-language design is the exploration of so-called very high-level languages, in which one actually programs in terms of declarative statements. The idea is to make interpreters sophisticated enough so that, given "what is" knowledge specified by the programmer, they can generate "how to" knowledge automatically. This cannot be done in general, but there are important areas where progress has been made.

Structure and Interpretation of Computer Programs, 2nd Edition, footnote #20 on page 22

Oh, ah! This is very interesting! It seems you can use declarative programming to specify how things work too!! So you can use annotations to "generate 'how to' knowledge automatically". (And this is where the apt annotation tool fits in.)

Well, this is nothing new if you think about it. Take, for instance, the Spring framework. In Spring (I'm not a Spring expert, so I may be wrong here) you build things declaratively. You end up programming things with XML, indicating how things work: which beans are "SimpleUrlHandlerMapping" and which beans are "org.springframework.jms.support.converter".

So annotations (or Spring declarations, or XDoclet) can be used to generate "how to" knowledge automatically. This is very important for those of us responsible for defining frameworks. By using annotations the "how to" is generated automatically. You can include best practices and best-of-breed, high-quality, field-tested patterns automatically. This reduces development time, reduces the probability of doing things badly and thus increases the probability of success in your software.

So we end up with another idea:

Annotations usage idea II: automatic code generation

Annotations can be used to generate code automatically, so as to automatically include best practices (bugs? ;-)) in your software for you.
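A toy sketch of idea II, assuming home-made @Table/@Column annotations (invented for this example, not from any real framework): the "how to" knowledge — an SQL query here — is derived automatically from the declarative description.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class HowToDemo {

    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE)
    @interface Table { String name(); }

    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
    @interface Column { String name(); }

    // Purely declarative description of a Customer ("what is")...
    @Table(name = "CUSTOMERS")
    static class Customer {
        @Column(name = "ID")   long id;
        @Column(name = "NAME") String name;
    }

    // ...from which the "how to" (an SQL query) is generated.
    static String selectAll(Class<?> entity) {
        StringBuilder cols = new StringBuilder();
        for (Field f : entity.getDeclaredFields()) {
            Column c = f.getAnnotation(Column.class);
            if (c == null) continue;
            if (cols.length() > 0) cols.append(", ");
            cols.append(c.name());
        }
        return "SELECT " + cols + " FROM "
             + entity.getAnnotation(Table.class).name();
    }

    public static void main(String[] args) {
        System.out.println(selectAll(Customer.class));
    }
}
```

A real engine would generate much more than a query, of course, but the shape is the same: declarative input in, generated "how to" out.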

But this automatic code generation is a double-edged sword. What happens if the automatically generated code is bad? Will the automatically generated code correctly address your non-functional requirements? Is it scalable? Is it performant? Can it be distributed in a cluster of application servers? How easily will it be maintained two years from now?

That is, if you automatically generate code with annotations, is it being correctly generated?...

... To annotate or not to annotate: that (whether the generated code is good or not) is the question!!

Have a good weekend,
Antonio

Thursday, July 21, 2005

Frameworkitis and reuse

Erich Gamma:

Frameworkitis is the disease that a framework wants to do too much for you or it does it in a way that you don't want but you can't change it.

We prefer many small frameworks over one heavyweight framework.

(Read more at Erich Gamma on Flexibility and Reuse)

Well, I fully agree with this. Frameworkitis is a widespread disease. It happens everywhere, to almost everybody. I assume it's somewhat difficult to realize you're suffering from it, and to recover as well. Let me try to quickly list some symptoms of frameworkitis.

Symptoms:

  • Versionitis. This symptom appears when you spend lots of time tracking down different versions of libraries for your projects. Sometimes you discover you cannot upgrade to a newer version of a framework you're using because the libraries the framework depends on break your stuff. Take, for instance, XML parser incompatibilities.
  • Documentosis. This symptom appears when you realize you don't have the complete documentation of the framework you're using, and the framework is so old that the documentation cannot be found on the Internet. A variation of this symptom happens when a new release of the framework does not include a list of things that have changed, and you have to test your whole project again when upgrading versions.
  • Contracting difficulties. This symptom appears when somebody on your project just disappears and you have to replace her. Then you realize you have to hire somebody with extensive knowledge of tons of frameworks and libraries, and the cost of contracting a candidate goes higher and higher.
  • Downloaditis. This symptom appears when you try to deliver client applications to end users and you realize you depend on a huge amount of things, so the end users have to download tons of stuff (if they ever do).

Frameworkitis is usually difficult to cure. For ongoing projects, all you can probably do is use something to manage complexity, such as Maven (by the way, Maven is moving to 2.0, so if you depend on 1.0 you may suffer versionitis or documentosis ;-) ).

The best way to deal with frameworkitis is probably to follow Joel Spolsky's advice. That is, if the functionality you're seeking in frameworks is a core business one, then just build it yourself, whatever it takes. As Joel suggests: "Find the dependencies -- and eliminate them".

Finally, if you have to use a framework, then I'd suggest building some sort of software abstraction layers (SALs) that depend on the framework, and making the rest of the project depend on these SALs. As you isolate dependencies in specific parts of your software, you isolate the places where frameworkitis may appear. And then you're safer.
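A minimal sketch of such a SAL; every name here is invented, and the "framework" is just System.out in this toy. The rest of the project talks only to the Log interface, and ConsoleLog is the single class you'd swap to change frameworks:

```java
public class SalDemo {

    // The only logging type the rest of the project ever sees.
    interface Log {
        void info(String message);
    }

    // The single place that depends on the concrete "framework".
    // Swap this class and nothing else in the project moves.
    static class ConsoleLog implements Log {
        static String format(String message) { return "[INFO] " + message; }
        public void info(String message) { System.out.println(format(message)); }
    }

    public static void main(String[] args) {
        Log log = new ConsoleLog();
        log.info("frameworkitis contained"); // prints "[INFO] frameworkitis contained"
    }
}
```

The point is not the logging itself but the shape: one narrow interface of your own, one adapter class, and the framework dependency stays quarantined behind them.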

I'm writing this because a customer of ours is building a NetBeans module and wanted to reuse some parts of NetBeans, creating dependencies on other modules. Of course it may be a good idea (reuse is always a good idea), but you have to remember to balance reuse against dependencies; after all, nothing comes for free (well, except Solaris ;-) ).

I'm back from holidays (always too short, you know ;-) ), so I'll post about the promised SwingWorker stuff (and about frameworkphobia, the disease I suffer from) as soon as possible.

Keep on swinging,
Antonio

Wednesday, March 2, 2005

Overpatterning Pattern

Oh my, just another "OverPattern"/"PseudoPattern" around. It seems somebody out there wishes to do weird stuff with J2EE.

The latest one I've seen wants to dynamically change stateless into stateful behaviour. Automagically! Wow! You know, stateful behaviour is such an expensive thing that you'd want to keep it under strict control, wouldn't you?

Session information is replicated between application servers in clusters (unless you have a special mechanism such as the one we have in our Sun Java System Application Server Enterprise Edition). And you wouldn't want to replicate, say, a "DataSet", would you?

People keep making things complex. Maybe it's just human nature. Who knows...

So I decided to write a pattern myself. Here we go. Comments, please:

OverPatterning Pattern (also known as "False Pattern Pattern" or "Don't waste your time reading nonsense Pattern" ;-) )

Problem

While surfing the Internet you see explanations of design techniques that may not be true patterns. How do you decide if the information is a pattern or not?

Context

Software engineers tend to think they're smarter if they say they've found/designed/created/invented a new Design Pattern (take, for instance, me ;-) ). Since the Internet, as we know it today, allows everybody to post weird/wrong stuff (such as this blog entry, for instance), there is a very low signal-to-noise ratio and it's difficult and/or time-consuming to decide whether a web page contains a useful pattern or not.

Forces

You want to quickly determine whether an article contains a pattern or not, so as not to waste time reading weird things.

Solution

Go read the definition of a pattern. Learn it. If the page does not fit the definition then you don't have to waste your time, and you can go see some ice on Mars instead.

Resulting Context

Applying this pattern results in interesting pages about Mars. ;-)

Cheers,
Antonio

Monday, February 7, 2005

The importance of being earliest

Test often, test early, they say. Want to see proof of that? Then go take a look at the latest JavaWorld article. Ivan Small takes us through a good example of why load testing and stress testing are so important. Throughout the article, Ivan dramatically improves a J2EE application by looking at some typical problems (synchronization and memory consumption). A well explained article and a good lesson for us all:

the earlier you test the sooner you go to bed!
;-)

Tuesday, September 28, 2004

One persistence to rule them all?

A single persistence engine for both J2SE and J2EE!!

Joining efforts and reducing confusion

That's great news. I think lots of people have been waiting for this to happen. Joining efforts between the JDO and EJB persistence models is indeed great news. And it reduces confusion.

By the way, it would be nice to see a filesystem-based (not relational database based) implementation. I'd love one of those!

Other choices still available

Of course we have Hibernate as well. Do you know what really makes Hibernate so good?

It wasn't designed by a committee. 
Hibernate grew out of experience in an actual software project.
It doesn't try to be all things to all geeks and so 
succeeds in doing one thing well.
(sic)

And I fully agree.

So basically we have choices. Personally, I'll review the drafts of the new proposal as soon as they're available, so as to help have as many good persistence options as possible for my applications.

And, yes, that's cool too. ;-)

Tuesday, September 21, 2004

Monos, Gorillas, Suses and Mandrakes

Background

So you know I run Mandrake 9.2 at home (Mandrake 9.1 on my laptop). I installed Mandrake 10 recently, but I don't really like it. I have been using Linux since kernel 1.2.13 (a long time ago; I still have the sources somewhere) and have evolved from the initial distributions to Slackware to Red Hat to Debian to Mandrake.

I have installed lots of Linux distributions, using the weirdest mechanisms you can imagine. I remember installing a Debian distribution on a laptop using PLIP (Parallel Port IP), and having to delete all of /usr/doc while installing in order to make it fit on a 40Mb HD (an i386 box). It made a nice print server for an Apple network. We could emulate an expensive Apple color laser printer using a cheap HP inkjet!

I really liked Debian. It was (it is) a rock-solid distribution. It took me a long time to move to Mandrake. But, you know, with age you become sort of lazy about keeping in sync with the latest stuff. Mandrake is a nice compromise between stability, security and ease of use. I imagine your preferred distribution is cool as well (after all, that's why you're using it, right?) but no, thanks, I'm all set with Mandrake.

The problem

So you can call me a Linux guru. At least a Linux-installation guru. Or so I was once upon a time. But not anymore.

I am disappointed: I can't install Novell's Mono (aka Ximian's) on Mandrake. I cannot find the latest binaries for Mandrake. It seems I cannot even compile it.

The solution

So I assume I'll have to install some sort of Novell-like distribution, say Novell's SuSE, in order to be able to run Mono.

Fragmentation?

I don't think the Linux developer community will be fragmented by the huge number of Linux distributions out there. The Linux kernel is well driven by Linus Torvalds. He's intelligent enough to keep tight control of the kernel so it doesn't fragment (yeah, same here with Sun and Java). But I'm afraid the Linux user community will fragment. In fact it is starting to happen right now. If you're a Mandrake user you're in trouble running Mono. At least this is happening to me. Why can't I run Mono on Mandrake? I don't think Novell wants it to run only on SuSE/Red Hat, right?

Help!

So if you happen to know how to run Mono on Mandrake (9.2, thanks) then please let me know how to make it happen by sending me an email.

Thanks in advance!!

Sunday, June 20, 2004

Least Effort, Conway's and Manageability

Law of Least Effort

Someone has asked me what I understand by the "Law of Least Effort" (or "Law of Minimum Effort"). Here we go:

When do *you* work better? When are *you* most productive? Is it when stressed? When relaxed?

It seems our brain follows the Law of Least Effort to solve problems. That's probably why we give our best when our brain's energy can be focused on a problem, when nothing in the outside world disturbs us.

This state, in which your brain can concentrate lots of energy on a single problem, is usually known as "flow" or "the zone". The phenomenon is well known among developers, but I think it applies to everybody else too! ;-)

As Joel expresses it:

We all know that knowledge workers work best by getting into "flow", also known as being "in the zone", where they are fully concentrated on their work and fully tuned out of their environment. (taken from Joel on Software)

For more on this you might want to take a look at: Understanding the Psychology of Programming.

Conway's Law

Organizational Pattern: Make sure the organization is compatible with the product architecture.

It seems Conway's Law is important when you build an architecture or when you reorganize your company. These measures can work to reduce the risk and increase the effectiveness of your enterprise architecture efforts.

I think Conway's Law is just a corollary of the Law of Least Effort. When a company is badly organized, or when the software architecture does not fit the company structure, tensions arise and you don't allow people to enter "flow". You don't allow them to concentrate. You don't allow them to give their best.

I have seen this myself. I remember a company where the IT department was separated into the Analysts Group, the Database Group and the Development Group. This was a big project. These divisions were well established and could not be changed. We designed an architecture where a group of developers was responsible for dealing with Analysts (thus enforcing communication with them). We made the Database Group responsible for designing and tuning the SQL queries inside our DAOs. Communication among the three groups was boosted. Everybody knew what to do. Law of Minimum Effort. Flow. The project was a success.

Manageability

Managing people and architecting software systems are interdependent. The human factor, the people building the system, is as important as choosing between JDO, EJBQL or Hibernate. Don't just focus on the technical part. See what people think and what their skills are. Maximize those. Apply the Law of Minimum Effort.

As Alistair Cockburn (of Use Case Analysis world fame) puts it:

Yet it is clear that social issues affect the software architecture in ways that the good architect takes into account.

Monday, June 14, 2004

OverPatterning, OverJ2EEing, OverUMLing

What I have seen, travelling all around and solving customer problems with Java, is that when you get confused, or you don't understand, you tend to overdo things.

Take, for instance, design patterns.

There's a clear definition of a pattern by Jim Coplien here.

What I like most about that definition is that it clearly states that a pattern is a proven concept that solves a problem when the solution is not obvious. Patterns include hints on when to apply them as well.

That's great, because it saves you lots of time. You just read the pattern, see if the boundary conditions of your problem are similar to those described, and then apply the pattern to solve the problem. You don't have to suffer all the trouble of a bad solution. It's a sort of good recipe that saves you lots of time and money.

I have noticed that people tend to apply *all* the patterns when confused. J2EE patterns, for instance. That's not the goal. You don't have to use *all* the patterns all the time. That's what I call overpatterning and, I think, its origin is confusion.

I have noticed as well that people tend to overdo UML. They usually get stuck writing UML docs and deliverables and forget about the problem. UML should be seen as a tool to express yourself, to communicate and to document. There's little point in documenting every little bit of a software design. You don't usually need the details. A higher-level vision is usually enough to be understood. You can leave the details for later. There's a nice article titled UML Fever that everybody should read. I read it now and then to verify I'm not suffering from UML fever.

When people get confused about J2EE they tend to overdo it too. "Over-J2EE-ing", I call it. Then you see architectures that do not fit the problem, such as using web interfaces when rich clients would be a better solution, or using EJBs for everything, even when not needed at all. You don't need to use *all* the J2EE components to build a J2EE-compliant application. Rich client applications are J2EE too. Applications without EJBs are J2EE too.

Law of conservation of energy. Law of minimum effort. The KISS principle.

Relax. You don't have to use all the features!
