Friday, October 28, 2005


Data-Centre Issues Centre Stage

The continuing struggle to deal with processing demand & space/heat/power constraints is getting more air-time. The constant drive in the finance industry for grid/blade farms is not helping either.

Choice quotes from the article:
Power requirements of the top 10% of data centers are growing at over 20%
According to IT infrastructure vendor, West Kingston, R.I.-based American Power Conversion Corp., the total cost of ownership for a rack of servers is between $80,000 to $150,000 per rack, and power consumption accounts for 20% of that cost.
The conclusion:
The old ways of throwing equipment at IT problems -- more air conditioning units, servers, UPS units -- is going to have to be revisited. And IT pros are going to be asked to find more efficient ways to increase reliability and computing capacity.
[emphasis mine]

Of course, my recommendation would be to move such compute requirements away from their von Neumann shackles and go with reconfigurable computing. A large FPGA runs at about 15W, compared to about 100W for a chunky CPU; given that and a performance difference of more than 100x, you can easily see the potential for massive compute densities in the data centre. As such, these next-generation grids are going to drive analytics capabilities to the next level.
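To make the density claim concrete, here's a back-of-envelope calculation using the rough per-device figures above (15W per FPGA, 100W per CPU, ~100x speedup on suitable workloads); the 10kW usable power budget per rack is my own illustrative assumption:

```python
# Back-of-envelope compute-density comparison for a rack with a fixed
# power budget. The 15 W / 100 W / 100x figures are the rough numbers
# from the text; the 10 kW rack budget is an illustrative assumption.

RACK_POWER_W = 10_000    # assumed usable power budget per rack
CPU_W, FPGA_W = 100, 15  # rough per-device draw
FPGA_SPEEDUP = 100       # rough per-device speedup on suitable workloads

cpus_per_rack = RACK_POWER_W // CPU_W    # 100 CPUs
fpgas_per_rack = RACK_POWER_W // FPGA_W  # 666 FPGAs

# Relative compute per rack, taking one CPU as the unit of work:
cpu_rack_perf = cpus_per_rack * 1
fpga_rack_perf = fpgas_per_rack * FPGA_SPEEDUP

print(fpga_rack_perf / cpu_rack_perf)  # roughly 666x in the same power envelope
```

Even if the speedup only holds for certain kernels, the power side of the ratio alone buys you several times the device count per rack.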

Tuesday, October 25, 2005


software pricing

OK, OK, I'm catching up on months of ignored blog reading (largely due to travelling..), but seeing Dave's link to this article on software pricing certainly gives food for thought. It would be interesting to see a vendor use these rules to write warrants on their services.

One point in the article that needs a further look is the option to buy an upgrade -- doesn't the future software value need to be stripped of maintenance and further upgrade value? Not sure.
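As a toy illustration of why the stripping question matters (all numbers here are made up, and this is just the payoff at exercise, not a proper option valuation):

```python
# Illustrative only: a toy "option to buy an upgrade" payoff, with invented
# numbers. The open question is whether the underlying -- the future
# software's value -- should be stripped of bundled maintenance and
# later-upgrade value before computing the payoff. This sketch just shows
# how much the stripping changes the number.

def upgrade_payoff(future_value, maintenance_pv, later_upgrades_pv, upgrade_price):
    # Strip out the value that isn't actually delivered by this upgrade.
    stripped = future_value - maintenance_pv - later_upgrades_pv
    return max(stripped - upgrade_price, 0.0)

with_stripping = upgrade_payoff(1000.0, 150.0, 100.0, 600.0)   # 150.0
without_stripping = max(1000.0 - 600.0, 0.0)                    # 400.0
print(with_stripping, without_stripping)
```

If the stripped and unstripped payoffs differ this much, the option is being materially over-valued whenever maintenance and future-upgrade value are left in the underlying.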


Joel gets bitten

Joel got worked up by a conference he attended. His rants on the comments page are quite amusing. He calls this type of behaviour "architecture astronauts"; I'd previously called it "consultant speak", after the source where I encountered it.

When working at a large bank, we used to have weekly meetings with an external architecture consultant. Sometimes (disturbingly frequently) he'd deliver a phrase or a whole diatribe (I'd lose the thread of the grammar, unable to track when a sentence or concept had finished), and we'd just look blankly at each other while trying to decipher what was said. I ended up just asking him to repeat it in a simpler way; it was unfathomable. I'm not actually sure anyone understood that stuff -- do some people just let it slide completely? Some must.

Monday, October 24, 2005


Loose Coupling / Service Browser

Steve suggests I read the thread on Ted's blog -- another mega comment thread.

OK, in this one -- which is mainly centred on loose coupling -- I pretty much agree with all of Michi's points. I don't think that a restricted number of verbs provides better scalability per se; however, there are other reasons to bend toward the REST approach. Having a convention whereby it is simple (and by this I mean *trivial*, with no parsing of content/body) for intermediaries (i.e. proxies) to determine the cacheability of a message is useful -- of course, HTTP has several other mechanisms specifically for this.
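The headers-only point is easy to demonstrate. A simplified sketch of the decision a caching proxy makes (real HTTP caching rules have many more cases than this; the function below is illustrative, not a complete implementation of the spec):

```python
# A proxy can decide cacheability from the request line, status code, and
# headers alone -- no parsing of the body is needed. Deliberately
# simplified relative to the full HTTP caching rules.

def is_cacheable(method, status, headers):
    if method != "GET" or status != 200:
        return False
    cache_control = headers.get("Cache-Control", "").lower()
    if "no-store" in cache_control or "private" in cache_control:
        return False
    # Cache only when the response carries explicit freshness information.
    return "max-age" in cache_control or "Expires" in headers

print(is_cacheable("GET", 200, {"Cache-Control": "max-age=3600"}))   # True
print(is_cacheable("POST", 200, {"Cache-Control": "max-age=3600"}))  # False
print(is_cacheable("GET", 200, {"Cache-Control": "no-store"}))       # False
```

Contrast that with a SOAP POST, where the intermediary would have to open and understand the envelope before it could say anything about cacheability.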

The other major reasons to use the REST style, imho, are debugging and portability. Debugging, because you can use a browser to view the state easily -- provided all context can be provided in the URL. Portability in the sense that I can send URLs easily in an email, with no explanation necessary; I can send a link to a user or another team, and no setup or extra software is needed. This, while not strictly needed, is *very* useful in an enterprise / cross-team scenario.

What would make the WS world much easier would be a service browser -- this may exist, I've been out of the loop for a little while. The service browser would have as its goal making access to a WS as easy as a web page in a normal browser. Most toolkits will build a test web page on the server side for this type of testing, however:
This needn't be a fat client; most of it could be done with a web site (except maybe authentication, depending on various domain issues), though a Firefox plugin would be ideal. For multi-stage scenarios, some scripting (E4X, say) would be cool -- a full scripting engine / command-line environment is not required, just enough to do if-statements and XPath reads/writes. The ability to send someone a link / email would be great; it's a bit more complicated, but could be done (either using a web site to store it, or packaging the whole interaction up into a single page, like S5/XOXO presentations). Does this exist?
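A sketch of the kind of multi-stage glue the service browser would need -- just an XPath-style read and an if-statement connecting two calls. The service, element names, and payloads here are all invented; a real version would be POSTing SOAP envelopes over HTTP rather than using canned strings:

```python
# Minimal two-stage interaction: read a value out of one (pretend)
# response, branch on it, and feed it into the next (pretend) request.

import xml.etree.ElementTree as ET

# Stage 1: canned response from a hypothetical lookup service.
stage1 = ET.fromstring(
    "<response><status>OK</status><orderId>42</orderId></response>"
)

status = stage1.findtext("status")  # the "XPath read"
if status == "OK":
    order_id = stage1.findtext("orderId")
    # Stage 2: build the follow-up request from the extracted value
    # (the "XPath write").
    request2 = ET.Element("getOrderDetails")
    ET.SubElement(request2, "orderId").text = order_id
    print(ET.tostring(request2, encoding="unicode"))
```

That's the whole language surface needed: reads, writes, and conditionals -- which is why a small embedded script engine would do, with no command-line environment required.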

Saturday, October 22, 2005


Loose Coupling: CORBA vs WS

Middleware Matters: CORBA did what?

There's a pretty good discussion going on in the comments thread of this posting, generally around the topic of loose coupling. Of course, I think what causes most of the need for discussion is the fact that people don't define loose coupling -- the requirements aren't set; I may put that in my next post. Bits will be alluded to below, though.

I think Michi Henning's comments provide much of the meat (though a couple of others are included); I will respond to a bunch of points here...

WS is no more loosely coupled than CORBA. WS proponents claim that loose coupling is achieved by using XML, because XML can be parsed without a priori knowledge of the contents of a message

While I agree with the first statement, XML does make it easier, since the non-type-based usages (i.e. non-statically-bound types) are easier to cope with and debug. The human-readable aspect is important -- debugging two CORBA implementations that aren't working together is a real mare. I have to admit I haven't really used CORBA in anger since 2.1/2.2, so things may have changed, but I haven't kept up.

CORBA is typically used for communication among application components that are developed by the same team, but is not used by companies to offer a public remote API that anyone could call

This means loose coupling is generally not an issue then -- or at least it's in no way the beast that it is across teams, orgs, or companies.

But WSDL ends up creating type definitions that are just as tightly-coupled as IDL ones. (And everyone seems to agree that WSDL is important.) But, where does that leave loose coupling? We have XML at the protocol level, which is loose, and we have WSDL at the application level, which is not loose.

True -- though it's more accepted to use looser types in XML than DynAnys in CORBA. Actually, it's not quite WSDL that's the major problem; XSD is the stickler -- it doesn't work for validation in a loosely coupled system. Dave Orchard has written endlessly about this one.
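The looser style that strict XSD validation works against is the "must-ignore" pattern: pull out the fields you understand and ignore anything new the other side has added. A small sketch (element names invented for illustration):

```python
# A "must-ignore" reader: only touches the elements this version of the
# consumer understands, so a sender can add new elements without breaking it.
# A strict schema validator would reject the extended message instead.

import xml.etree.ElementTree as ET

def read_quote(xml_text):
    root = ET.fromstring(xml_text)
    # Deliberately ignore everything except the fields we know about.
    return {"symbol": root.findtext("symbol"), "price": root.findtext("price")}

v1 = "<quote><symbol>ACME</symbol><price>12.5</price></quote>"
# A later sender adds <currency>; the must-ignore reader keeps working:
v2 = "<quote><symbol>ACME</symbol><price>12.5</price><currency>USD</currency></quote>"

print(read_quote(v1) == read_quote(v2))  # True
```

Validating v2 against a closed v1 schema fails, even though every consumer of v1 can process it perfectly well -- which is exactly the evolution problem being argued about in the thread.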

versioning and loose coupling are not about just being able to send additional data, but also about changing existing data, operations, parameters, and exceptions. Moreover, real-world versioning is sometimes not about changing interfaces or data types but about changing *behavior*: it is common for someone to want to change the behavior of an operation for a new version of the system, but without changing any data types or operation signatures

Changing behaviour of a single component is the *goal* of many SOA / loose systems. The trick is doing it without breaking anything else.

Loose coupling is about dealing with application-specific data types and interfaces and whether it's possible to gracefully evolve these over time


multiple interfaces are a far better approach

True, however there's an overhead, particularly in a statically bound system (i.e. one node), to maintaining multiple interfaces. It's a necessary approach, but you want to avoid it for every possible change if you can. XSD / IDL is pretty painful here; actually, RMI/Java can be much more forgiving.

trying to put loose coupling into the encoding of the data (i.e. using XML) is at too low a level precisely because loose coupling is *semantic* issue, not a syntactic one
Debug debug debug is about all I can say.
Most application programmers don't want to implement dynamic data binding logic. They want a serializable object with access methods
True; however, depending on the interface size, change rate, and flexibility of the binding, this is simply not worth the effort. Actually, XMLBeans is pretty good at tolerating XML 'noise' robustly. Are there any similar ORBs?

Technologically speaking, CORBA offers a robust and complete stack with a clear and well-documented approach on designing and implementing distributed solutions.
Only disadvantage : tool/server providers were not really interested in providing true interoperability...

The firewall issue is a fake argument.

Robust: yes. Well-documented: yes for RPC, no for loosely coupled systems. Interop: bad -- trying to get Visibroker or Orbix to talk across more than a sub-version of the same vendor's product was generally horrible. It remains to be seen how this shapes up in the WS world.

Firewalls were a big problem with CORBA, both through the lack of good HTTP transports and through the issue of embedding IP addresses inside IIOP packets, which get VERY confused when going through the many NAT layers of DMZs etc. Again, the lack of visibility due to binary formats, and the difficulty of translating those embedded IP addresses, made CORBA miserable to work with in NAT environments.

Loose coupling will never happen, at least not before we have application components with reasoning capabilities.

It happens today. Most people will not see it until they admit it's more than a technical feat, however. It's (probably mostly) a management / control-cost trade-off issue.

Apologies for the lack of accurate attribution; if anyone complains, I'll go back and fix it. Read the whole thread, there's a bunch of good stuff in there.

Friday, October 21, 2005


New company, new look

As of October 10, I have a new company -- ngGrid. It's focussed on bringing the ever-expanding FPGA sector to the finance industry. I'll no doubt be blogging about this stuff more often.

I've had various discussions about the fact that it's called next generation grid - particularly, since an FPGA is not a grid. I chose the name based on the fact that while not a grid, FPGAs can be used to solve some of the same problems -- the performance for (certain) financial calculations is quite staggering.

Shells, VMs, query languages.

Just seen this, which is pretty damn cool. It seems from the article, and the previous one it links to, that jhat is built with a JavaScript engine -- presumably the one that's built into Mustang. This brings up a few thoughts..
On the topic of shells and xQL languages, good auto-complete makes these things much easier to use -- I seem to remember C# 3.0 / C omega changed the order of the query clauses in order to get this. While that looks mad to anyone who's spent too many hours inside databases, it does kinda make sense.
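The ordering point in a nutshell: the editor can only suggest field names once it knows what you're iterating over, so naming the source first is what makes completion possible. A small illustration (the query strings are just for comparison; C# 3.0's query syntax puts `from` before `select`):

```python
# Why clause order matters for auto-complete: completion on fields needs
# the data source to be known already.

sql_order = "SELECT name FROM customers"          # source appears after the fields
linq_order = "from c in customers select c.Name"  # source appears first

# Python comprehensions have the same wrinkle in miniature: the tooling has
# to look right, at the 'for' clause, before it can complete c["..."].
customers = [{"name": "ACME"}, {"name": "Initech"}]
names = [c["name"] for c in customers]
print(names)  # ['ACME', 'Initech']
```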

Tuesday, October 18, 2005

OPML Rant.

Here. Quite funny, I thought -- not intentionally, just the rant-ish-ness of it... (ed: please learn English).

Monday, October 17, 2005


Hmm. Dion has a mare at Gatwick with Aerolineas Argentina. I hate it when airlines are so bad -- because of the cost, availability, and pre-booking, by the time a problem is encountered (i.e. when you are actually trying to fly) you're effectively in a monopoly, unless you can afford to discard the ticket entirely and buy from another carrier (usually at another time).

Oddly, I've flown with Aerolineas loads this year, and they've been the most efficient and reasonable carrier I've encountered -- for example, their domestic tickets are changeable (including economy) up until right before the flight, and they have a few other nice features. (Though, in order to get reasonable prices, I think you need to buy them in Argentina -- I spoke to several people who paid a fortune, like three times the price, months in advance, for the same tickets bought from abroad.)

On the flip side, I had an entirely rotten time with British Airways on a heavily delayed flight on the same trip... I may talk about that one later, it'll make my blood boil if I do it now and I need to get back to work.

Wednesday, October 12, 2005

BEAWorld 2005, London.

Day one (probably will skip day two).

Good talk, entertaining, and highlighting the fact that SOA is a business process change more than a technology change.

BEA AquaFoo

Platform, abstraction, blah blah. Good quote: "security" is a service... I'd like to see that one explained. Authentication, and policies for authorisation, can be -- but overall security?

Once again, there were no guidelines about how to actually achieve an SOA. No concrete examples of before and after. No hints as to *why* the tools enable SOA -- partly because no-one ever really defines what they mean, other than droning on about these mystical services. They tell you that you want them, but won't help you define what is or isn't a well behaved service.

Panel Discussion
Various chats about ROI and how to convince the business that *this* time, the technology has clothes.

Steve Jones, Capgemini, makes a bunch of good points about the mind-set shift needed to design these systems. Finally something concrete.

Azul Systems
Talk of the day. Azul have built a processing-virtualisation system based around their own processor (24 cores, little floating point, low power). Java-only right now, but it essentially provides a normal JVM which runs on the original OS and simply acts as an IO proxy for a virtualised VM running on the Azul platform. Interesting angle, and it looks promising -- provided the money side is sane. Ultimately the IO will not be proxied but will go direct to/from the Azul machine. One to watch.

Went to a couple other talks, but they were too boring to comment on. Sorry.

Monday, October 10, 2005

This is desperately needed for desktop Java. Mustang please!

Having a good security model in Java is great, but if we can't leverage it except in an all-or-nothing manner, then its value is drastically reduced. I wonder how easy it is to write a security manager to provide this behaviour... hmm. Lazy web, anyone?

Saturday, October 08, 2005

Been away, but that'll have to wait for another post...

Reading Udell, and his quote from here:

[DCOM, RMI, CORBA] are about a platform. CORBA was 95 percent API, 5 percent interoperability. Web services is zero API and 100 percent interoperability.
Methinks this is a good point. Of course, the corollary is probably true.

While CORBA et al succeeded excellently at the platform elements, and interop wasn't the best between vendors (or indeed between versions from the same vendor, I*ahem*ona), will web services really succeed at the platform elements? Will WS interop ever really gain traction for the transaction, RM, and routing elements? It's hard enough to get two or three vendors to work correctly at the API level (say, for transactions) without bringing other people's network stacks into the game.

Jon also mentions:
...cross-enterprise Web services is a marginal use case -- the real value is in "getting different technology systems to interoperate within the same enterprise."

Given the tendency of enterprises to absorb other enterprises into themselves, the inter- versus intra-enterprise distinction is yet another fuzzy boundary. But assuming we can draw that line somewhere, isn't the higher cohesion afforded by a "platform" just the sort of efficiency that Sessions recommends exploiting within a system boundary?

I've worked in places where the intra-company divisions are more separated than inter-company ones -- it's largely down to budget lines and motivation from on high. In that type of place, a consistent platform is not a luxury you can hope for. I tend to think that, in practice, many systems will be stuck together like they always have been, with bits of custom code -- the real difference going forward is that it will be XML and in-process glue. In general there will always be something that doesn't quite work for your system. The platforms that succeed will be the ones that make it possible to apply glue (whether by providing hook points, scripting environments, pluggable endpoints, etc.). Otherwise, we'll be writing Perl / Java / whatever intermediaries with lots of hacky transforms to make things work. At least with web services, we can read the messages without needing to unpack binary streams...

Going back to the cross-enterprise point, I think this goes along with the idea that the farther apart the systems are -- in terms of control / platform / etc, NOT location -- the less you want to rely on layers of interop between them. Simply minimise the number of moving parts.

This is where the REST crowd have an advantage: by minimising the ambition, they are more likely to succeed.
