Thursday, October 26, 2006

The Double If

This is an anti-pattern that I have started to see when people attempt to think in terms of objects but don't think in terms of messages. The situation is that you have one method that performs a test and returns either a real value or null (in this case, that method is GetParentName). Then anybody who calls GetParentName either needs to be OK with getting a null back, or needs to check that the returned value is not null. This code illustrates the anti-pattern:


class Item
{
    // Returns either a real string or null
    string GetParentName()
    {
        Item parent = GetParent();
        if (parent == null)
        {
            return null;
        }
        else
        {
            return parent.GetName();
        }
    }
}

void SetupTextBox(Item myItem, TextBox someTextBox)
{
    string parentName = myItem.GetParentName();
    if (parentName != null)
    {
        // TextBox::SetText(string) will throw an exception
        // if it is passed a null
        someTextBox.SetText(parentName);
    }
}


A way to fix this is to change the class's interface. Instead of asking the class for a value, we tell the class to pass its information along to somebody else.


class Item
{
    void TellParentNameTo(StringReceiver receiver)
    {
        Item parent = GetParent();
        if (parent != null)
        {
            receiver.SetString(parent.GetName());
        }
    }
}

void SetupTextBox(Item myItem, TextBox someTextBox)
{
    // myItem will do the best that he can do, even if he
    // has a null
    myItem.TellParentNameTo(someTextBox);
}
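
For this to work, TextBox has to implement some StringReceiver interface. I never show that interface here, but a minimal sketch, assuming the SetString name used above, would be:


interface StringReceiver
{
    void SetString(string str);
}
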


The biggest problem with the second example is that you need really, REALLY broad interfaces that are used everywhere. In this example, TextBox would need to be a StringReceiver. In fact, anybody who could conceivably get a string from an Item would need to be a StringReceiver. If TextBox had been written as a StringReceiver from the beginning, everything would work great. However, suppose that StringReceiver was written later than TextBox. Either TextBox will need to be retrofitted, or an adapter class will need to be written:


class Item
{
    void TellParentNameTo(StringReceiver receiver)
    {
        Item parent = GetParent();
        if (parent != null)
        {
            receiver.SetString(parent.GetName());
        }
    }
}

class TextBoxStringReceiverAdapter : StringReceiver
{
    TextBox textBox;

    TextBoxStringReceiverAdapter(TextBox textBox)
    {
        this.textBox = textBox;
    }

    void SetString(string str)
    {
        textBox.SetText(str);
    }
}

void SetupTextBox(Item myItem, TextBox someTextBox)
{
    // myItem will do the best that he can do, even if he
    // has a null
    myItem.TellParentNameTo(
        new TextBoxStringReceiverAdapter(someTextBox)
    );
}

Thursday, October 05, 2006

Concurrency

There has been a lot of talk recently about how the exponential growth in CPU power is tapering off. Almost everybody is saying that the future will be multicore processors and multiprocessor systems. A lot of consumer PCs are shipping with dual core processors. The Xbox 360 uses a custom, triple-core CPU. The PlayStation 3 will have 7 active cores. This isn't theory; it's quickly becoming fact.

Everybody who has ever tried knows that multithreaded programming is hard. The programmer is required to do a lot of mental juggling to try to imagine what the system will do. I'm not a fan of making a programmer do a lot of mental gymnastics if the computer can do a better job of it.

One of the biggest challenges in multiprocessing is dividing an application into multiple threads. To this day, most applications run in a single thread. Even when an application does use multiple threads, it is usually only to keep the application responsive during some lengthy operation, such as a find; if a second thread were not used, the entire application would freeze up.

Interestingly, programmers have been dividing their programs into pieces for a long time. Instead of carrying out a process by performing 1 million discrete operations, we perform 10 high-level operations. These operations need to be carried out in order, because the output of one is used as the input to the next. Also, each of those high-level operations is implemented in terms of 15 mid-level operations, and each of those is implemented in terms of another 10 mid-level operations, and so on. These subdivisions are usually used to take what is essentially one long process and split the code for that process into multiple, bite-sized chunks that an individual programmer can understand.

Unfortunately, this doesn't help with the concurrency problem. However, a lot of programmers have started programming in an object-oriented way. Making classes is a very different process from making functions. The purpose of functions is to allow the programmer to focus on a small section of code. However, the code that the programmer is working on is still sitting inside of some larger whole. The purpose of classes is to let the programmer work on a piece of the program in complete isolation. This is why a class contains both state and behavior. In fact, the class that a programmer writes for one program should be able to be dropped, without change, into a completely different program. A BlogPost class should be reusable, as long as it provides all of the behavior that everybody needs.

Good OO design is characterized by a principle that I recently learned. The principle is "Tell, don't ask". This has nothing to do with bullying your objects around. To translate, to "tell" your objects is to send messages to them. To "ask" your objects is to write them such that they cough up their internal state to whoever asks. So, the principle is really "write your objects so that they expose behavior to the outside world, but don't write them so that they expose state to the world". I'd love to go into more detail, but I don't have space right now.
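
As a tiny illustration (this Account example is mine, invented just for this paragraph), the difference between "ask" and "tell" looks like this:


class Account
{
    int balance;

    // "Ask" style would expose GetBalance()/SetBalance() and force every
    // caller to pull the state out and make the decision itself:
    //     if (account.GetBalance() >= amount) { account.SetBalance(...); }

    // "Tell" style keeps the decision inside the object:
    public void Withdraw(int amount)
    {
        if (balance >= amount)
        {
            balance -= amount;
        }
    }
}
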

If we take Tell-Don't-Ask (TDA) to the extreme, we can make some radical assumptions. First of all, we can completely remove return values. Return values, I claim, are simply an artifact of the procedural days. If you need to have an object tell you something, send a callback object to him. You TELL him to TELL you something. Crazy, no? Now, you might point out that this is a lot of work for something that used to be handled by something as simple as a return value. That is a valid concern. However, my gut tells me that the callback scheme would actually simplify a lot of code. Again, I would love to go more into this topic, but I don't have space. Please just assume (as I am assuming) that callbacks are preferable to return values in a lot of cases.
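
As a hedged sketch of what that might look like (the Adder and SumReceiver names are hypothetical, made up just for this example), a return value turns into a callback object like so:


interface SumReceiver
{
    void ReceiveSum(int sum);
}

class Adder
{
    // Instead of "int Add(int a, int b)", we TELL the receiver the answer.
    public void Add(int a, int b, SumReceiver receiver)
    {
        receiver.ReceiveSum(a + b);
    }
}

class ConsolePrinter : SumReceiver
{
    public void ReceiveSum(int sum)
    {
        System.Console.WriteLine(sum);
    }
}


The caller writes new Adder().Add(2, 3, new ConsolePrinter()); and never sees a return value.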

So, with this mindset, we can stop thinking of method calls on an object and instead think of sending messages to the object. Since the object can't send anything back (except by sending US a message), there's no guarantee that the target object will process the message right away. In fact, there's no guarantee that the target object will process the message at all. For example, when you're working on a networked application, you need to realize that nothing is ever guaranteed. You might have an object that represents a remote computer. You might send a SendPacket message to that object, but that object might not send a packet out right away. In fact, if the remote computer goes down, the packet might never get sent at all.

Most importantly, this shifts our thinking. In the procedural world, it is vital that all steps be carried out in the correct order, because each step updates data somewhere and returns information to the caller. In the TDA world, nothing is ever returned to the caller, so that rigid, global ordering goes away. Of course, SOME ordering is still needed, but it can be captured in a different way: only the ordering that is actually significant is enforced.

A trend that I am starting to see in systems that are designed with TDA is that they seem to be "configured" more than "programmed". I mean that main() is pretty much responsible for instantiating a bunch of instances and wiring them up correctly. Then, main() sends one message to one object and, like a bunch of dominos, eventually something comes out of some other object. In the meantime, all of the objects involved were furiously chatting. The important thing is that they were all basically dormant until a message was sent to them. Then, they did a whole lot of stuff and sent a whole lot of messages to other objects, but then they went dormant again.

So, what does all of this have to do with concurrency? Suppose that each object in the system had its own thread and message queue. The queue is pumped by the thread, and each incoming message is dispatched to the appropriate message handler. Now, when an object needs to collaborate with other objects, it not only doesn't HAVE to wait until the other object is done, it simply DOESN'T wait. It is executing in a completely different thread that may be running on a different processor (or even a different machine!). Also, since the object doesn't have to wait, it can run through all of its code very quickly and return to a dormant state. While dormant, the object's thread would not get scheduled.
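
Here is a bare-bones sketch of that idea in code. Everything about it (the ActiveObject name, using a delegate as the message) is my own simplification, assuming modern C# and ignoring shutdown, error handling, and priorities:


using System;
using System.Collections.Generic;
using System.Threading;

class ActiveObject
{
    readonly Queue<Action> messages = new Queue<Action>();
    readonly Thread pump;

    public ActiveObject()
    {
        // Each object owns its own thread, which does nothing but pump the queue.
        pump = new Thread(Run);
        pump.IsBackground = true;
        pump.Start();
    }

    // "Sending a message" just enqueues work; the sender never waits.
    public void Send(Action message)
    {
        lock (messages)
        {
            messages.Enqueue(message);
            Monitor.Pulse(messages);
        }
    }

    void Run()
    {
        while (true)
        {
            Action message;
            lock (messages)
            {
                // Dormant: while the queue is empty, this thread is not scheduled.
                while (messages.Count == 0)
                {
                    Monitor.Wait(messages);
                }
                message = messages.Dequeue();
            }
            // Dispatch the message to its handler, outside the lock.
            message();
        }
    }
}


With something like this, main() really is just wiring: create a few of these objects, hand each one references to its collaborators, and Send one initial message; each object stays dormant inside Monitor.Wait until a message arrives.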

What this effectively does is turn a processor's execution cores into a simple resource, like physical RAM. Just as a program does not need to decide where in physical RAM to store its data (it stores its data in virtual address space, which the processor maps to physical RAM), the program will ALSO not need to worry about how many threads it has.

There are so many details that I have left out. This post could turn into a whole lot of smaller posts with a lot of detail about various small parts of this scheme. But this is a decent overview. I hope that people read this and have good ideas.