

Evaluating New Technology
Notes on paradigm testing

When faced with a potential new paradigm, the first thing you must consider is the criterion by which you will judge it. Tried-and-true technological yardsticks frequently give false or highly misleading results, and the reason lies in how those yardsticks are created. Let's take a look at that process.

The development of yardstick criteria follows a predictable pattern. The criteria start as real-world numbers: this job runs in X hours, or this engine has Y horsepower. But these numbers are hard to generalize; they do not compare well across multiple systems and companies, and they depend on too many variables to be reliable and repeatable. The solution to this problem is the next step: find "low level" numbers that seem to be good predictors of the "real" numbers.

Here is where the yardsticks tend to get paradigm-specific. These "low level" numbers can be excellent predictors in one paradigm and terrible predictors in another. Numbers like: this job moves Xk per second from disk, or this engine has Y cubic inches. These are NOT direct performance numbers; they are predictors of performance. In the case of the cubic inch, the predictor works fairly well until you look at the rotary engine, and it fails completely when applied to turbine engines. Both rotary and turbine engines represent different engine paradigms.

It is very important to understand the difference between these "real numbers" and the "predictors." Here lies the major problem in evaluating any potential new paradigm. Given the invalidation of the prevailing yardsticks, how can anyone ever evaluate a truly new technology? How can someone hope to develop a fair, generic, and paradigm-free yardstick? The first step is to separate the "real numbers" from the "predictors."

Toward that end, I will now present several common computing predictors. For each, I will explain why it is truly a predictor and why it is not always valid.

1: Percent of CPU utilization.

In the current paradigm, a low percentage equals higher total throughput, which is a good thing. In other paradigms, a high percentage equals better throughput. The "real number" here is total CPU time consumed, and the lower the better. Be careful: this means total system CPU consumed, which is a very hard number to quantify on a system running more than one task. There is a lot of CPU overhead at the system level, and a significant part of it is not reported at the single-task level.
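To make this concrete, here is a minimal sketch (all figures invented for illustration) of how the utilization percentage can point one way while the real number, total CPU seconds consumed, points the other:

    # Hypothetical comparison: CPU utilization vs. total CPU time consumed.
    # The workloads and percentages are invented for illustration.

    def total_cpu_seconds(wall_clock_hours, avg_utilization):
        """CPU time actually consumed while the job ran."""
        return wall_clock_hours * 3600 * avg_utilization

    # System A looks "busy": 90% utilization, but the job is done in 2 hours.
    a = total_cpu_seconds(wall_clock_hours=2, avg_utilization=0.90)

    # System B looks "idle": 30% utilization, but the job takes 8 hours.
    b = total_cpu_seconds(wall_clock_hours=8, avg_utilization=0.30)

    print(round(a))  # 6480 CPU-seconds
    print(round(b))  # 8640 CPU-seconds: lower utilization, yet more CPU consumed

The predictor (the percentage) and the real number (CPU seconds consumed) disagree.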

2: I/O latency.

I/O latency is commonly called "disk wait." In the current paradigm, the lower this number the better. In other paradigms it can range from having no effect to falling in the higher-the-better category. This wait represents the only free time the system has to work on other tasks; in some environments that is a good and necessary thing. In a parallel environment, disk waits can be "free" because they consume no CPU time.
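As a minimal sketch of why disk wait can be "free," run an I/O-bound task and a CPU-bound task side by side (the workload is invented; time.sleep stands in for a disk wait):

    # While one task sleeps on I/O, the CPU is free to run another task.
    import threading
    import time

    def io_bound_task():
        time.sleep(2.0)  # stands in for a 2-second disk wait; consumes no CPU

    def cpu_bound_task():
        total = 0
        for i in range(10_000_000):  # stands in for real computation
            total += i

    start = time.perf_counter()
    t = threading.Thread(target=io_bound_task)
    t.start()
    cpu_bound_task()  # runs while the other task is waiting on "disk"
    t.join()
    print(f"elapsed: {time.perf_counter() - start:.1f}s")
    # Elapsed time is roughly max(io, cpu), not their sum: the disk wait
    # cost the system no CPU time at all.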

3: Communication latency.

This is the wait associated with any communication task. In the current paradigm, lower is better. However, several of the tricks the current paradigm uses to produce lower communication latency increase system overhead, which reduces total system throughput. In short, there are times when increasing this wait time will increase total throughput. In other paradigms, raising this value's maximum can actually reduce the average wait.
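A toy model (all constants invented) shows the trade-off: paying a fixed per-send overhead once per batch raises the wait on any single message, yet raises total throughput:

    # Batching: higher per-message latency, higher total throughput.
    PER_SEND_OVERHEAD = 0.005  # seconds of system overhead per send (invented)
    PER_MSG_COST = 0.001       # seconds to transmit one message (invented)

    def throughput(batch_size):
        """Messages per second when sending in batches of batch_size."""
        batch_time = PER_SEND_OVERHEAD + batch_size * PER_MSG_COST
        return batch_size / batch_time

    for n in (1, 10, 100):
        print(n, round(throughput(n)))
    # 1   -> ~167 msg/s: lowest wait per message, worst total throughput
    # 10  -> ~667 msg/s
    # 100 -> ~952 msg/s: highest wait per message, best total throughput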

The only true measure of computer performance is total system throughput. Numbers like: this job, on this hardware, runs in X minutes. Or results like: this system, on this hardware, can support Y thousand users before response time exceeds 1.5 seconds. Both of these examples produce an apples-to-oranges situation when compared with any other system, which unfortunately makes any comparison tricky at best.

So how can any meaningful comparison be done at all?

To some extent, the comparison must be based on what has meaning to you. The strategy I recommend is as follows:

1: Identify your business make-or-breaks.

This will allow you to "fix" some of the variables. Each number you can fix reduces the complexity of the comparison. Be careful that you use only real-world business numbers here. System numbers, "predictors" or otherwise, have no business here.

2: Clean up the fixed business values.

Once you have a set of fixed numbers, it is time to make them paradigm-free. Numbers like: from the time we receive the last update, this processing must complete in X hours, are good. Numbers like: this batch run must complete in Y hours to allow time for the index-build, are bad; the index-build is an artifact of the current system, not a business requirement. Real-world business numbers only.

3: Equate values that are not fixed.

Once the fixed values are set, the next step is to equate as many of the remaining performance values as possible. A good way to equate hardware platforms is to use processing cost: once the fixed business numbers are met, the best system is the one with the least total cost.

Figuring the cost of a CPU second can be eye-opening:

    cost of a CPU second =
          (total system cost, including supporting sub-systems
              / useful system life in seconds)
        + (total operational cost per year / number of seconds in a year)

Once you know the cost of a CPU second, you can derive the cost of running your task on each system. Do this for both systems being compared and you may find that one system is less expensive even when it is slower and occupies more physical boxes.
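The same formula transcribed directly into Python; the systems and dollar figures in the comparison are invented, for illustration only:

    # Cost of a CPU second, and the cost of running a given task.
    def cpu_second_cost(total_system_cost, useful_life_years,
                        operational_cost_per_year):
        seconds_per_year = 365 * 24 * 3600
        life_seconds = useful_life_years * seconds_per_year
        return (total_system_cost / life_seconds
                + operational_cost_per_year / seconds_per_year)

    def task_cost(cpu_seconds_used, cost_per_cpu_second):
        return cpu_seconds_used * cost_per_cpu_second

    # System A: fast but expensive.  System B: slower, cheaper, more boxes.
    a = cpu_second_cost(500_000, 5, 80_000)   # invented figures
    b = cpu_second_cost(150_000, 5, 40_000)   # invented figures

    print(round(task_cost(3_600, a), 2))  # 20.55 -- job takes 1 CPU-hour on A
    print(round(task_cost(7_200, b), 2))  # 15.98 -- same job, 2 CPU-hours on B
    # B is slower and uses twice the CPU seconds, yet costs less to run.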

4: Remove issues that are not business-based.

Internal system architectures have no place in the evaluation. What internal model a system uses IS NOT RELEVANT to how it will perform the needed business functions.

If you think the internal model does matter, you are by definition stuck in the old paradigm. A new paradigm is, by definition, a new way of accomplishing some task. No large benefit can ever be found by doing the same old thing "a little better."

When you use this approach to comparing systems, you are in a position to take advantage of the really big improvements. Paradigm changes can be evaluated with this simple and sane approach, and you can decide, based on business facts, which system really meets your requirements best. You will finally have the system that is truly best for the business need.

