Tuesday, December 28, 2004

Christmas survived...bring on New Year...

Wow Christmas just snuck up on me and then dashed past. Been back to the old homestead this past weekend, and now here in the big smoke again. It's been a great Christmas, my nephew's first (he drooled a lot) and my friends' 26th (they got drunk and drooled a lot).

So next on the traditional things-to-do list is New Year's, which will be a quiet one in with the infinitely better half. With that in mind, here comes the ubiquitous "Year in Review" post...

Work

So we'll start with work, where there has been a lot going on in some respects and not too much in others. I might get a project to completion in the New Year, but I won't have finished one this year. That kinda sucks, but then I've worked on some exciting things.

We have just got a new CEO, and early in 2005 there will be more new faces around. We also said goodbye to some great people. It's going to be a year of change in many ways.

The development team did manage to get our technology open sourced successfully (http://www.openharmonise.org), that was very satisfying.

Personally

While work was a bit of a mixed bag, personally I had an amazing year. I ended 2003 and started 2004 by telling anyone within earshot that 2004 was going to be the best year ever, and although none of them would believe me it did turn out that way.

I met and fell completely for the above mentioned "better half".

I travelled, which, for those that don't know me, is an amazing feat. Until 18 months ago I had taken one holiday in four years (for my brother-in-law's stag party). Since then I have been back to Munich for the Oktoberfest and to Paris for the first time. I had a great time on an abortive attempt at camping in Wales during gale season. A great friend and I went to Rome together, making me appreciate his friendship even more. Finally, my girlfriend and I recently went to New York together, a trip during which I realised that not only are we great together in our relationship, but we are travelling soul mates.

My predictions for a great 2004 extended to those around me as well, and that also worked out pretty well.

  • One friend finally made the move to London and now has a great flat and a good job, and is generally having a good time, no matter what he says sometimes.
  • Another friend found a new flat in an area he loves and got the promotion that he richly deserved but refused to believe he would get.
  • Friend number 3 (in no particular order) moved in with his girlfriend and seems to be happier than I've ever seen him.
  • One friend has a book deal for 2006-11!
  • Another friend was published for the first time.
  • And finally, although there were other great things that happened, we all went out and had a great time over Christmas.
On and on to 2005...

One of my friends is now championing 2005 as the next great year, and I am willing to go with that, although I think that for me it will be a hard year. I am going to be focusing on moving forward with a lot of things that are going to require a lot of work from me, but I truly believe that they are worth it. People that know me well will know what I'm talking about, those that don't will find out in due course.

I am looking forward to the next year, I hope it turns out well for all of you. This week I am going to be working damn hard to get my current project out of the door and then celebrate the New Year so I will be back with you all next week. Till then...

Thursday, December 23, 2004

Lessons Learned #4: Using XSLT key function for lookups...

I use XSLT all the time, more and more in fact. Obviously we use it as part of the publishing engine of Open Harmonise, but there are far more uses that we put it to, mostly data preparation or presentation. For example, a goal in many of our projects is to end up with XML data sets that our clients can give out to industry, but very few, actually none, of our clients are willing to look at the XML documents to QA them. When this happens, a quick XSLT later and they have HTML documents to do the data QA on; if any changes have to happen to the original XML then all we do is regenerate the HTML again.
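A minimal sketch of the kind of QA transform I mean, turning records into an HTML table; the RECORD and NAME element names are invented for the example:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html>
      <body>
        <table border="1">
          <!-- One row per record, so the client can eyeball the data -->
          <xsl:for-each select="//RECORD">
            <tr>
              <td><xsl:value-of select="@id"/></td>
              <td><xsl:value-of select="NAME"/></td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>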

One of the features of XSLT that I've known about for a long time and not really used is the "key" element/function set. I haven't used it because it has been quite badly supported within XSLT processors; however, this doesn't matter so much when I'm doing this data preparation work, as the only place I need it supported is within the Sonic Stylus Studio processor, which has excellent support for all of XSLT. I haven't checked the support in most Java processors for a while; I imagine it is much better now.

Well today I finally found a reason to try the "key" stuff out again. I am preparing some data, a rather large single XML data set from a client, for import into a system we are building in Open Harmonise. Some of the data associated with parts of the XML contains Value Codes which need to be converted into the correct Paths to the Values within the Harmonise repository. I published a set of files which provided mappings from Codes to Paths and used these in the main XSLT. Unfortunately these are rather large files with several thousand mappings in them, and they are referenced several thousand times as the main XSLT is run. This meant a 9 minute wait for the processing to finish.

XSL "keys" are designed specifically for this purpose, to make lookups very, very fast. There is only one problem, you cannot directly reference an external XML document (using the document() function) from a "key" element. After some hunting around I found the solution, and here it is.


<xsl:variable name="externalDoc" select="document('doc.xml')/child::ROOT"/>

<xsl:key name="lookupKey" match="CHILD" use="@id"/>

<xsl:template name="lookupID">
  <xsl:param name="lookup"/>
  <xsl:param name="ID"/>
  <xsl:for-each select="$lookup">
    <xsl:value-of select="key('lookupKey', $ID)/child::NAME"/>
  </xsl:for-each>
</xsl:template>

The variable at the start contains the external XML document's root element, but the key is set up as normal, as if it were being used in the main context document. The named template is created so that we don't have to repeat this XSL code every time we want to use the key.

The template takes in two parameters. The first, "lookup", is the variable that we set up at the start pointing to the root of the external document; I did it this way because my XSL used several external documents. The other parameter, "ID", is the value we are going to pass into the key.

We then use a "for-each" element on the lookup variable, which remember is the root element of the external XML document. While using a "for-each" on a root element may seem silly, there will of course be only one of them, this is done to set the current context to that XML document. Keys always operate within the current context, therefore the key will now work with the external XML.

There are much simpler ways of doing this, simply navigating the external XML without having to change the context to use keys, however this solution brought my 9 minute XSL processing down to a much more manageable 30 seconds.

Saturday, December 18, 2004

Did HP just try to screw me?...

Some time ago I told you that I ordered an HP dx2000 Linux machine direct from HP. It was on two week availability, so I sat back and waited. It should have timed perfectly with the turning on of my broadband access, which, by the way, is on and perfect. After a while HP took the money out of my account, so I figured that the machine was on the way; when it didn't arrive after a little while I thought I would give them a call to check on the delivery status.

When I called them, as well as hearing some really cheesy Christmas music (which was great), I was told that they'd stopped supplying that machine. They said I could talk to the sales team and order something else, or just get a complete refund.

So my question is, exactly how long after taking my money were they thinking of telling me that I wasn't going to get anything? I took the refund and immediately ordered something from Scan, which for only £100 more is actually a much better machine and I still didn't have to buy Windows so all is good.

Thursday, December 09, 2004

Using our tech to build a paper based system...

I had a meeting with the clients for my current project to work on their requirements. This project is to review a set of sector-specific controlled vocabularies and their application to thousands of resources. We are providing a web based system on Open Harmonise that will contain the master copies of the vocabularies, the resources and their metadata, allow for the edits, and output machine readable copies in XML. So we are providing the technology and the client is providing a small army of sector specialists to do the editorial work.

All well and good, not an unusual project for us, except that I found out in this meeting that the sector specialists doing the editorial work have decided that they want to do this on paper! Granted they are not technical people, but a web based system isn't that hard to deal with.

Well, our client is willing to go along with this, and will provide staff to take the paper forms, once filled in, and input the data into the HTML forms. So I'm building a lot of pages to publish the forms into PDF files to be printed off.

I can understand the reasons for this, but that doesn't stop the project from going against almost everything I believe in as a developer. Should be an interesting project at the least.

Wednesday, December 08, 2004

Lessons Learned #3: Unicode in SQL...

We are continuing our adventures in ensuring that Unicode is fully supported throughout our application, and a colleague of mine solved the last of our issues. We were losing the Unicode characters during the round trip to the database, which in this case is MS SQLServer 2000.

SQLServer 2000 is fully Unicode compliant, so it all should have worked. All the columns that might contain Unicode characters were set to the n* types (nchar, nvarchar and ntext) and yet we were still losing the information.

Apparently the solution is to prefix the string literal with an "N" character, as in;

update bob set surname=N'unicode surname'

This prefix is SQL-92 Intermediate, and optional in SQL99 & SQL2003. Here is an extract from the only place in the SQLServer 2000 help files where I could find this vital information;

"Unicode strings have a format similar to character strings but are preceded by an N identifier (N stands for National Language in the SQL-92 standard). The N prefix must be uppercase. For example, 'Michél' is a character constant while N'Michél' is a Unicode constant. Unicode constants are interpreted as Unicode data, and are not evaluated using a code page. Unicode constants do have a collation, which primarily controls comparisons and case sensitivity. Unicode constants are assigned the default collation of the current database, unless the COLLATE clause is used to specify a collation. Unicode data is stored using two bytes per character, as opposed to one byte per character for character data."

Hope this helps, I know that we're very glad we found it.

Monday, December 06, 2004

Lessons Learned #2: Character Encoding

Since I've spent the last few work days dealing with character encoding issues I thought I would write up some useful points to remember.

1) Internally Java is excellent at maintaining character encoding, so you don't have to worry about how you deal with Strings and Characters within your application.

2) The points where character encoding will become an issue are at the boundaries, where you are sending or receiving data.

- When using streams to read/write data to/from a network connection or a file you should declare the encoding you want to use, e.g. "UTF-8" (see the sketch after this list).
- Ensure that any databases you interact with are correctly configured for the character encoding that you want to use. For example in SQLServer you will need to make sure that all String fields are either nvarchar or ntext, not the usual varchar and text types. You may also have to configure the database as a whole to ensure that connections use the correct encoding type.
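A minimal sketch of that first point in Java, declaring the encoding explicitly at both ends of a file round trip (the file name and sample text are just examples):

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class Utf8RoundTrip {
    public static void main(String[] args) throws IOException {
        // Write with an explicit encoding rather than the platform default
        Writer out = new OutputStreamWriter(
                new FileOutputStream("data.txt"), "UTF-8");
        out.write("Michél \u65E5\u672C\u8A9E"); // Latin plus Japanese characters
        out.close();

        // Read it back, again declaring the encoding explicitly
        BufferedReader in = new BufferedReader(new InputStreamReader(
                new FileInputStream("data.txt"), "UTF-8"));
        System.out.println(in.readLine());
        in.close();
    }
}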

3) Swing components all support Unicode, however you will need to ensure that the font you are using supports the Unicode ranges that you are using. The default fonts delivered with Java will support most of the Unicode set, but this does not include Japanese, Chinese and Korean glyphs. MS Windows comes with "Arial Unicode MS", however you may be better off using code to find a font that will support the Unicode ranges that you need.
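On that last point, a minimal sketch of hunting for a suitable font at runtime; the sample string would be whatever characters your application actually needs to display:

import java.awt.Font;
import java.awt.GraphicsEnvironment;

public class FontFinder {
    // Returns the first installed font that can display every character
    // in the sample string, or null if none can.
    public static Font findFontFor(String sample) {
        Font[] fonts = GraphicsEnvironment
                .getLocalGraphicsEnvironment().getAllFonts();
        for (int i = 0; i < fonts.length; i++) {
            if (fonts[i].canDisplayUpTo(sample) == -1) {
                return fonts[i].deriveFont(12f);
            }
        }
        return null;
    }
}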


Sunday, December 05, 2004

Things are a changing here at LargeHQ...

The ongoing saga of computer changes here at home continues with the news that I am finally giving up on this machine (far too old and slow to use as a main workstation) and I have ordered a new machine. A really simple and cheap machine from HP, which comes without Windows.

I also finally ordered broadband, 1Mb line with Bulldog, which should be in place before Christmas. So very soon I should be back up to full swing here.

If I could go back in time...

The first thing that I would do is ensure that all developers, at the dawn of computing, agreed to Unicode 2.0, or something else. I don't really care what standard they chose, as long as it works and they all used it. By now that should mean I could tell my boss that foreign character support in our application is a given, and therefore save us several days of testing and head scratching wondering where our Japanese characters have gone.

Character encoding and date formatting are the two things that I hate more than anything in programming. Why can't this be simpler? Why am I restricted to using a specific font to support all the characters, when it surely couldn't be too hard for OS vendors to enable the font subsystem to fall back to a generic font with full Unicode 2.0 support for the individual characters that the desired font doesn't cover?

Monday, November 29, 2004

My first post from Linux...

Okay, so maybe that's not such a big thing to most people, however I'm really pleased. This would have happened last night except Blogger appeared to be having problems. It's been a long time since I first posted about my move away from Microsoft, and although it has finally occurred partly due to a machine failure, it is good to finally be here.

I should talk a bit about Suse 9.2 Professional, which has been a joy to use so far. I am looking forward to trying it out on a more powerful machine as soon as I can. One thing that has surprised me is that it's not as polished as I thought it would be. I've never been under the misapprehension that a Linux distro is going to be completely polished, but I was thinking that Suse would be, given the way Novell are talking about it. Linux is still nowhere near ready to be a replacement for Windows for many, many people. Luckily it's fine for me.

It has taken me a little while to get my development environment up and running on this machine, and it is far too slow to do much with, but I'm going to try to get back into Quintanona tonight.

Linux at home...

I'm finally up and running again with the Suse 9.2 Professional Linux machine. It's quite old but it will have to do till I get a new one. I also managed to get it dialling in to an ISP so I'm online as well, if you can call a 33.6K connection online these days. Have to sort out broadband soon.

Things at work are settling back to normal after the Open Source launch, although this week we have a stand at the Online Information exhibition at London Olympia, so I am manning the stand on Wednesday and attending on Thursday.

I have to give a mention to my best bud Craig who is always there for me when I need him, I wouldn't have got this far in life without you dude.

Hopefully back to more regular and development focused updates this week.

Tuesday, November 23, 2004

Open Harmonise launch...

Sorry I've not posted in a while, I was away in New York for a week and came back to the mad rush to get everything ready for the Open Harmonise launch which happened last night at the Institute of Directors here in London. You can now download the full source and read all of our documentation online at the Open Harmonise website.

The launch event went extremely well with about 60 people from leading public and private sector organisations. Our CTO Grant Cocks and CEO Patrick Towell delivered presentations and Ray Ayers (Managing Director of Silicon Valley) and Dominic Savage (Director General of BESA) also spoke.

It's great to finally have Harmonise out into the community, and it's been a learning experience getting it there. I am sure it's going to continue to be so over the next few months. We are going to get the binary releases out the door as soon as possible.

Tuesday, November 09, 2004

Disastrous evening...

When events are overtaking you and everything is going wrong there can be a moment of complete clarity. In that moment you know exactly what went wrong and when it happened. I'm not talking about regrets, those are a failure to appreciate a life lesson, no I am simply talking about a realisation of the point where things could have gone differently.

Last night I had one of those moments, the clarity of vision showed me that I really should not have written that post yesterday. I should not have "spoken too soon", because last night it all went wrong.

The evening didn't start well when SUSE Linux informed me that Server#2 did not have enough memory for the YAST2 installer to run. I really hadn't realised what a bad spec that machine is, it's just run happily for so long that I've not had to think about it. However a 200MHz AMD K6 with 64Mb RAM probably isn't going to be much use.

Having failed to install Linux on that machine, I decided that I would do some programming to cheer myself up, which of course would involve booting up my main machine. This machine has been on its last legs for a while; since sometime in January it has been failing to boot, claiming that it has suddenly lost ACPI support. This was an issue with the motherboard, so a quick "Load BIOS defaults" and reboot would normally solve it. Last night it wouldn't even get past the POST check, no boot screen came up, it's just dead.

I know it's strange but I am quite sad about this, I've had this machine for a while, it's seen me through a lot. I am lucky though, I'd recently copied all of my important data onto the other machines, even the stuff that I was working on last weekend.

For the time being I have made the move to Linux, with that machine being the only viable workstation I have. I now have to debate whether it is worth attempting to resurrect that old machine with spares, but I definitely need to start looking into a new PC, something I'd hoped to put off for a little while.

Monday, November 08, 2004

My network upgrades

Had some fun last weekend upgrading the home network. Previously I simply had a couple of machines networked via a crossover cable; however, as those who read regularly will know, some more machines have descended my way and I made the decision to move to Linux.

Suse Linux 9.2 Professional

You should know that I'm a complete Linux newbie, not inexperienced or a novice, I can only truly, hand on heart, say that I'm a newbie. Many people at work have given me help and advice, although this can be a little off-putting when they are all advocates of different distributions (Mandrake, Red Hat and Debian). But thank you to you all, you know who you are.

Saying all that though, I have found my first few days with the latest Suse a breeze. The installation went perfectly on my chosen first sacrifice (currently called simply Server#1). I managed to get ftp, telnet and Samba all working happily with my Windows machines (albeit after remembering to allow the ports in the Suse firewall).

In the current setup all the machines have fixed IP addresses, I decided against setting up a DHCP server on Linux as I'll be getting broadband in the next few weeks and I will leave the address serving to the router.

New network

This means that I currently have three machines on the network, more than a simple crossover cable can handle. So I also spent the weekend getting some new network hardware up and running. A D-Link 10/100 8-port switch, lots of Cat5 cabling and a KVM switch.

Only one slight hiccup...I had not thought to check whether all my machines have PS/2 ports for keyboard and mouse, and of course one didn't. The oldest of my currently running machines; it never occurred to me to check for that.

With the help of some friends (both very good sys-admins) I managed to track down the correct adaptor for the keyboard (already have one for the mouse) so this should all be working by next week.

T.B.D.

Next up is installing Linux on Server#2, then thinking of better names for them all! After that I am waiting for some memory and hard drives for the final machine and I am all done until I get a new workstation to replace my main machine, which must remain Windows :(

Just so you know, I'm in New York from Thursday for a week, so you won't hear from me. When I get back we are into the final few days before the launch of Open Harmonise, very exciting.

Monday, November 01, 2004

Why do I blog?

Had an interesting conversation with a colleague in the pub the other day. We got to talking about why I write this blog, I think he was a little surprised that I am doing this. To tell you the truth I think I am too.

I have been fortunate to be in a job that I love since leaving university four and a half years ago, and while I have no intention of leaving anytime soon I have also become more and more aware that one day I may have to. Thinking about my future has become a major preoccupation. While this doesn't answer the question it is an important piece of context that you should know.

The trigger...

Until a little while ago I had a low opinion of blogs, I thought that they were overblown diaries for total extroverts. I couldn't imagine why I would want to tell the world what I ate for breakfast let alone why the world would be interested. Of course I was wrong, I was shown the error of my ways by a good friend Kieron Gillen. Reading Kieron's blog showed me that they could be more than a simple online diary, that they could be used in a creative fashion.

Kieron used to be the deputy editor of a major games magazine, and has since gone freelance. He still writes articles and reviews about gaming, but he is also pursuing a career in comics writing and many other forms of writing. His blog is very simply titled "Kieron Gillen's workblog", which is apt as it is the single point where you can find references to all of his work, which, him being freelance, appears in many weird and wonderful places (occasionally your doormat when you are least expecting it!).

It is true that his blog is partly self-promotion, but then he is self-employed so you've got to let the guy take every opportunity, however he has made it so much more than that. From his 2am, ever so slightly drunken, ramblings to Negativeland, his recently completed (experimental?) online comic. He is always willing to respond to comments, and actively encourages feedback.

While I don't claim to be anywhere near as creative or talented a writer as Kieron, I like the way in which he very simply sets out his stall with his blog. He lays out his works and gently invites people to browse and critique. This is what I admire.

The blogging experience!

One of the observations that my colleague made was about the balance in my posts between information and opinion (I won't say what he thought the balance was), and we talked about these different blogging styles. I guess I am still trying to work out my style, still trying to find what's best, but the experience as a whole is worth it. My writing is improving, although I'm sure some would disagree, and I am trying to be more and more serious about it.

Answer the damn question!

Okay, so why do I blog? I guess like Kieron I am setting out my stall, showing my wares. I am trying to raise my profile and see if people are interested. My goal I guess is that in a few years time the most relevant part of my CV will be a link to this blog, so that people can really understand where I am coming from and what I know, or think I know. If in the future I sit down to an interview and can spend time discussing with people about topics I've posted about, then I've done well, I will have achieved something from this.

Whether or not any of this would help me get a job is irrelevant, especially at the moment, but blogging is helping me to find my voice about the things that are important to me. I guess that is the answer.

Quintanona: Web Services Intermediary framework for Axis

I spent some time over the weekend on Quintanona, my WS Intermediary framework for Axis that I've talked about before. The last time I posted an update on this I told you about the problems I was having with immutable SOAPElements in Axis; well, I've gotten around that and changed Quintanona a bit in the process.

When implementing a request or response handler method, the only parameter you take in is a QuintContext object, from which you can access DOM elements from the SOAP message, header and body elements. You can alter these elements and the changes get passed on to the actual service.

I also started to add functionality so that you can place a Runnable object into the QuintContext when handling the request, this will run in parallel to the main service request. Your response handler will only be called once both your Runnable object and the main service have returned. This should enable you to build higher performance intermediaries.
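To make that concrete, here is a purely illustrative sketch of what a handler might look like; the QuintContext accessors used here (getBodyElement, addParallelTask) are my invented names for the ideas described above, not the actual Quintanona API:

import org.w3c.dom.Element;

public class AuditingHandler {
    // Request side: tweak the message and kick off a parallel task.
    public void handleRequest(QuintContext context) {
        Element body = context.getBodyElement(); // assumed accessor
        body.setAttribute("audited", "true");    // change flows on to the service

        // Assumed hook: runs alongside the main service call; the response
        // handler fires only once both have returned.
        context.addParallelTask(new Runnable() {
            public void run() {
                System.out.println("writing request to the audit log...");
            }
        });
    }

    // Response side: inspect or alter the response DOM here.
    public void handleResponse(QuintContext context) {
    }
}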

Once that is finished I just have to finish the Unit Tests and build some decent examples and I will be ready for a first release.

Apparently I built REST Web Services in 1999...

I do like it when an idea that has been around for a while gets a definition. A recent good example would be Podcasting, where enclosures for RSS had been defined for quite a while and yet not until the perfect application for these came along did it gain a definition and therefore wider recognition.

Being slightly obsessed with Web Services I have of course taken an interest in the SOAP vs. REST debate. For a while I didn't pay attention, but as REST got more and more coverage I thought I had better take an interest in this "alternative". The following is the accepted definition of REST from Roy Fielding:

"Representational State Transfer is intended to evoke an image of how a well-designed Web application behaves: a network of web pages (a virtual state-machine), where the user progresses through an application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use."

Therefore, in the simplest explanation, REST Web Services are those where the request is contained entirely in the URL and the information is returned as XML.

REST advantages

Obviously REST Web Services have their advantages; mostly they are all about being easy to use. The learning curve for a new REST Web Service is very shallow, being very closely aligned with the founding block of Web technologies: HTML over HTTP. In fact the only difference is that a REST Web Service will return you XML instead of HTML, because HTML over HTTP is in fact a REST system.

The definition of a REST Web Service can be as simple as one or more URLs and an XML Schema defining the return information.

Finally REST Web Services are very simple because they do not require complex APIs or frameworks for access.

-----------------------------
Note: Technically SOAP Web Services do not require APIs or frameworks for access either, however they are a hell of a lot more complex if you don't use them.
-----------------------------
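To illustrate just how little client code a REST service needs, here is a minimal sketch in Java; the host and query parameters are invented for the example:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class RestClient {
    public static void main(String[] args) throws Exception {
        // The entire request is expressed in the URL...
        URL url = new URL("http://example.com/books?isbn=0596006423");
        // ...and the response comes back as XML on a plain HTTP GET.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // raw XML, ready for any parser
        }
        in.close();
    }
}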

Having finally understood what REST Web Services are all about there seemed to be something familiar about them, but I just couldn't put my finger on it...Till now.

So 1999 then?

Ah yes, returning to the title of this post. Recently my ex-girlfriend, who I was seeing while finishing my Computing Science degree, was clearing out her MyYahoo! Briefcase and came across what would appear to be the only surviving complete copy of my final year project. She very kindly e-mailed it on to me.

Well this was a blast from the past. I hadn't thought about this in a very long time, and it turns out the thoughts I had were wrong. You see I was lucky enough to be in gainful employment about a week after my last exam, and so I didn't really take much interest in my results or think about university at all for a good long time. Going so quickly into the real world of programming was a shock to the system to say the least, so I began to assume that everything that I had done before was laughable.

Taking this into account you can begin to imagine my surprise when reading my project report to learn that I had in fact built REST Web Services for it. Of course I didn't know this at the time because I had never heard of the terms REST or Web Services.

Web PMT!

The idea for my final year project came to me while in Munich on a year's work placement. I spent a large part of that year building an online issue reporting tool in Perl. Before then I had done my best to avoid programming, I wanted to be a support engineer (I know, what was I thinking!). However, building this tool, sometime in '98 and '99, I really began to find my calling, which was web based applications. I could see how cool this work could be. I taught myself ASP with the now defunct ZD University (the Ziff Davis subscription online learning site).

The issue reporting tool was very text heavy, typical Perl app, but I began putting charts into it, using the age old trick of stretching 1 pixel images to the correct proportions. This got me thinking about using the web to build online visualisation tools, and the most familiar to me at the time were project management tools. Hence my Web Project Management Tool was born. Please ignore the stupid acronym, I was being a "kooky" student.

This concept of an online project management tool developed further in my last few months in Munich, there were several problems that I needed to solve before moving forward. The first of these was to turn this idea into something my university would accept as a project.


"Data Visualisation and the Web"

In the end that was my title, the project was slanted towards being research into how new technologies could enable data visualisation and manipulation to be pushed down into the browser.

While I had my project, the goal and the title, I still did not know how I was going to fulfill this. All I did know was that stretching 1 pixel images was not going to be enough, so I began hunting around for a better solution.

Eventually I stumbled upon Vector Markup Language (VML), Microsoft's precursor to the Scalable Vector Graphics (SVG) standard. You should remember that I was a very inexperienced programmer at this point, so I had no idea what I was getting myself into. I was not fully equipped to evaluate the suitability of a technology, if I was I probably would not have gone down this route.

Data down the pipe

My last problem was to work out a way to send data to the browser independently of the normal HTML request, I wanted the visual layout information and the data to be separated so that the client did all the hard work. One slant to my project was to show a solution that would be good for companies with very low spec server systems by only having them deliver information and not perform much processing.

I had selected VML as my visualisation technology, not really knowing anything about it. The first stage of my education in VML was to understand this XML thing that people kept talking about. It took me a few weeks to finally get what it was all about, that VML was expressed in XML, and what that meant. I had the idea that I could perhaps encode the application data as XML, but all the examples I had seen used XML from static files, whereas I needed to feed dynamic data to the client.

It may sound strange now, but you must remember that this was all very new to me. So I tried a little experiment to see if I could use an ASP page to generate an XML document on the fly rather than have to save it as a static file. I was amazed and overjoyed when this worked. Technically I had created my first REST Web Service.

The final year project

The project itself progressed slowly, there were many teething problems, mostly to do with strange behavior from VML. I didn't manage to do as much as I had hoped. I had really wanted to get two different charting techniques implemented (GANTT and PERT) and show how the visualisation could be changed without reloading any data.

Still, I did manage to create an interactive GANTT chart which could be refreshed with only a data load rather than a complete page load in the browser.



[Screenshot: GANTT chart with sub-tasks rolled up.]

[Screenshot: GANTT chart with sub-tasks exposed.]

Should I be proud?

This trip down memory lane has been a strange experience for me, and I am left with some questions. Looking back, should I be proud of what I achieved in the context of the time? The title of this post says that I created REST Web Services in 1999, but I have to admit that I didn't really know what I was doing, so perhaps it was random chance that gave me this opportunity.

I have a strange belief in fate, so coincidences don't bother me as much as they do other people. Because of this I am not so freaked out that the company I ended up working for has been a heavy user and supporter of XML technologies. Barely a day goes by when I am not dealing with SOAP, RSS, RDF or WebDAV documents. I do understand that I was lucky that my choices back in 1999 prepared me for all of this.

So there you are, the tale of how I managed to stumble across something new whose time had not yet come.

----------------------------
Note: If Rob Kinmond, my final year project supervisor from Staffordshire University, should ever happen to read this, please get in touch, I would love to catch up, to find out if you did run that course on XML the year after.
----------------------------

Developer Support #4: Review of "Real World Web Services by Will Iverson"


This book turned out not to be what I was expecting, which is not to say it isn't good, just a surprise. All the blurb about it, from the O'Reilly site to Amazon to the back cover, does not sell this book correctly. Even the preface doesn't quite convey what it is truly about; for that you need to wait till page 25:

"The primary focus of this book is not on creating your own web services but on using existing web services in productive and useful ways."

There it is, the perfect description. The book takes the reader through code examples for innovative ways to utilise the Amazon, eBay, Google, FedEx and CDDB web APIs; all of the examples are in Java.

Will Iverson is someone who really gets, and wants others to understand, that the real benefits of Web Services come when you begin to forget about the details of them and use them simply as APIs, as you would any other. He rarely distracts you with the details of SOAP or WSDL, except to generate higher-level APIs. However, he does highlight the potential problems of Web Services by talking about the fallibility of network connectivity.

While short, the book explains all of the APIs represented, with good examples, enough to get you thinking about ways in which you can use them. This book is perfect for those people who are just getting into Web Services and want to understand what can be done with them and why they are potentially so powerful. By the same token, Real World Web Services is not recommended for those with any real experience of their own.


Tuesday, October 26, 2004

The month Microsoft lost me...

This month I turned 27, maybe that's got something to do with this, maybe not, but something about me has changed. The tide has turned, and boy has it turned. It's not that I've suddenly gone anti-Microsoft, it's just that this month I started to think that my interests will be better served without them.

As changes go people might think that this is not a big deal, but for me it is. I have been mocked for most of my "adult" life for being very pro-Microsoft, and quite frankly for good reason. Until recently they've always done well by me. Not that they know this, I have had no direct contact with them, but they have been a major factor in my life as a techie up to this point.

In the beginning

You see I wasn't going to be a developer, in fact I wasn't going to be anything to do with computers. I was always going to be a sound engineer, I was going to mix live concerts, record the great orchestras and bands. But that was the early nineties, when I was in high school, and that was when the recording industry began to go digital. The first ADAT machines were launched, 8 track digital tape machines recording onto S-VHS video tape. It was when MIDI was branching out from simple drum machines and when SMPTE synching seemed like the most important task in the world.

Obviously to be the best sound engineer I knew that I had to learn all I could about this strange new digital world ... that was when I became hooked on computers. I saved up for 18 months till I could afford my first machine, a 486 with 8Mb RAM and 40Mb Hard disk. Wow, was that a machine that gave all it could. I still have most of the parts lying around, and the case has a Cyrix chip motherboard wedged in there now.

---------------------------
As a quick aside, it is funny how things stick with you. Even now, as I am writing code and designing systems, I still think of things as sound channels. Inputs are microphones which are pulled together in the mixer which can process the signals in many combinations to produce different sounds. Channels have outputs which can be fed back into other systems as inputs until one is considered the final output and fed back to the user.
---------------------------

Windows 3.1 - 2000

So back to our story. In those days I was a wet-behind-the-ears computer newbie, and so Microsoft was king. I started with Windows 3.1, a great operating system but one which was bettered by Windows for Workgroups 3.11. Now that was the greatest of the pure 16bit operating systems from Microsoft. With its 32 bit filesystem access, add in WinG (the predecessor to DirectX) and the Win32s extensions (which given a push would run Office 95) and you had a great and stable system.

Of course I was utterly fascinated with all of this and constantly consumed knowledge about computers. So much so that Windows 3.11 was soon not enough. It was then that I first braved the world of beta operating systems, and for most of the next 5 years my computers always had some beta software on them, mostly the OS.

I went through 5 different Win95 betas onto the production code, then Win95 OSR2 (when Microsoft finally got it right). After that came Windows NT 3.51 with the Alpha of the NT4 desktop (the first Win95-like desktop for NT), which never really worked. Next up were a couple more Win 9x betas till finally the leaked (as in Microsoft Germany "accidentally" opened up the FTP server and didn't close it for a week) Windows 2000 code, at the point when it was still called NT5.

---------------------------
As another little aside, my friends found it hilarious that, after they had warned me off installing a Windows beta, it booted up to reveal its build number as 666. This was the only beta I ever had trouble with, and in fact that wasn't Windows' fault.
---------------------------

I love Windows 2000, I still use it at work, and at home I am writing this on Windows 2000. But that is soon to change.

What I really need

So we are back to the present day, and the title of this post tells you that something has changed, that I am, or at least am about to, move away from my reliance on Microsoft. I suppose it started towards the end of last year, in fact last autumn, when I began to start coding at home again, trying to recapture the fun of being a developer. I was really starting to get back into it when the proverbial "shit" hit the fan at work, and so began another drawn out sequence of late nights in the office, some right through till morning. This obviously put somewhat of a crimp in my home coding time.

Flash forward to January of this year, when a good friend of mine is living on my sofa after having just moved to London. He wants to use my computer to hunt for jobs on the net, but it doesn't appear to be playing ball. I've not used it for three months by now, so I really don't know what state it is in, but something has definitely gone wrong, at the BIOS level. It's old anyway, so I don't really bother fixing it, thinking I'll get another one soon.

Flash forward again to the summer, and I actually want to, finally, get back into my home coding and still have not fixed my computer. It's just about usable, if I reset the BIOS on each boot.

At around this time I got an offer of some old machines from a friend of mine who works for a large London university. These machines are old and slow, but I had a need for a few specialist servers for running CVS/Maven/Tomcat etc. and these seemed to fit the bill. However, I knew that they would have to be running Linux for me to get any real use out of them. Besides, I like to remain legal, and the thought of paying for that many MS Windows Server licenses freaked me out.

At that point I had decided to set up three of my computers as servers with Linux, keep this one as Windows for .NET testing and running my Matrox rt2500 with Adobe Premier and get a new workstation. Then a thought struck me, I asked myself, "What is it you really want to be doing on your computer?" and the answer was simple. Write code and articles.

The wonderful world of Open Source

As a developer I understand and truly appreciate the world of Open Source. I use open source APIs all the time at work and at home, and as I'm sure you've read in my other posts, my company is about to release all of our software as Open Source. Perhaps this had something to do with it as well, maybe I was finally ready to shed the shackles of commercial software reliance.

Or perhaps not.

The Novell Effect

I've always liked Novell. I went to a presentation from them the year they launched NDS (Novell Directory Services) and it blew me away (that was when I still wanted to be a network guy, not a programmer). It also made me want to cry, because I knew that they had already blown it and were being quickly overtaken by Microsoft. But I guess I underestimated them. Their comeback with Linux has been inspired, I don't think they could have done it better.

Porting Novell Netware to run on Linux is a no brainer, but buying Suse so they could offer a version of Linux backed by their convoluted Unix IP, insulating people from the insanity grenade that is SCO and their lawyers, is fantastic.

But they have gone further than that, fully embracing the Open Source world and becoming a partner with it. So I guess the imminent release of Suse Professional 9.2 probably had something to do with it as well.

Linux and me

So I had asked and answered my question about what I really wanted to use my PC for and Linux offered it all. I had realised the cost of legally supporting Microsoft on all my machines and almost fainted and finally Novell had entered the game and really impressed me. So here I am, enjoying the last few weeks of Windows before I say goodbye to it, at least for now.

I cannot, and will not say that I will never be back. To tell you the truth all the Longhorn stuff I've seen looks and sounds really great (even without WinFS). But right now Linux is the right choice, I know this. I think Microsoft has some really big troubles ahead of it, they are on for a 2006/7 release of Longhorn, where will the Linux desktop and server be by then? Another 4 or 5 releases, another kernel. Look at the money that is pouring into Linux from IBM, Novell and Sun. I've enjoyed my time using Microsoft software, I still won't say much against it, but I am really looking forward to my time with Linux.

---------------------------
Final aside. Although I had already made my decision to move to Linux, something else happened that would have pushed me over the edge anyway. PC Pro, a fairly Microsoft-leaning magazine, did a group test of Office software, the first group test like this that I can remember them doing since Office 95. In the test Open Office won (with Star Office recommended for businesses that wanted to pay for the support). They went to great lengths to say that it did not win simply because it's free. To me that's about the biggest IT news of the year!
---------------------------

Tuesday, October 19, 2004

Open-Harmonise details: Publishing engine

In my previous post (More details on our open source initiative...) I promised to write a series of posts providing more details about Open-Harmonise. I decided to start this series with a description of the publishing engine.

This article only covers some very simple examples; what it does not show is the true power of Open-Harmonise, which is building pages through complex metadata matching. I'll show you this another time, once we have covered the basics.

End-To-End XML

One of the main features of Open-Harmonise is its end-to-end XML publishing. Many different publishing frameworks/engines talk about this, and some do do this. Our approach is perhaps a little different.

The core of the system takes/pulls in a whole bunch of XML from different sources; it then publishes the data pointed to by this XML into a final XML output document. After this it is down to the developer to assign an XSLT/FO translation to this XML output to get the final published output (HTML, WML, PDF etc).

There are three main XML inputs to the Open Harmonise publishing framework;

1) State XML
2) Page template XML
3) Object template XML

These contain the rules that describe the contents of the output XML; these rules point to specific data to be published, or describe searches to be run over the repository, the results of which will be published into the output.

The data fed into the output XML, based on these rules, can come from any Publishable object, either one of the built in objects or from other objects that developers add to their Open-Harmonise server.

This article is going to focus on these XML rules files and the XML output, leaving details of the data sources till later.

State XML

One of the most powerful features of Open-Harmonise is that it is abstracted away from the name/value pairs of HTTP requests. Instead the publishing engine expects an XML request to operate on, for example;




<state>
  <page id="1001"/>
  <referer>
    <page id="1000"/>
  </referer>
  <session id="D_TagLSNNxiLWK1U5BasjO"/>
</state>


The above is a very simple example of an XML state in Open-Harmonise. The referer information is generated automatically if it is available, however the rest must be passed into the request. Now obviously, while XML can be posted in an HTTP request, this is not something that can be done with a simple HTML link.

As I said before, the Open-Harmonise publishing engine is abstracted from the name/value pairs of HTTP requests, in fact it is abstracted from any type of request. It could just as easily be placed behind a Web Service. All of this is dealt with by protocol handlers; included with Open-Harmonise are handlers for dealing with:

1) HTTP name/value pair
2) HTTP post

The second of these simply grabs an XML document from the HTTP post request, therefore the above State could be sent as a complete XML document. This is typically how we handle requests from Flash applications.

The first of these is how HTML links are handled. It requires that the State XML is encoded into name/value pairs before the request is made, i.e. in the link on a web page. This handler then decodes these back into XML.

The encoding scheme is very similar to XPath, and therefore will be familiar to anyone who has worked on XSL coding. The State example given before would look like this:

http://webdav/servlet/XRM?page/@id=1001&session/@id=D_TagLSNNxiLWK1U5BasjO

Although the referer element in the example XML will have been generated automatically by Open-Harmonise, you can probably see how you would create such an element as a name/value pair.

referer/page/@id="1000"

When creating the XSLT for translating output XML into HTML there are several utility xsl:templates to assist in creating such complex links.

Page Template XML

In the example State XML from before there are references to Pages. These are definitions of logical pages within Open-Harmonise. A logical page is made up of an XML file for the Page Template (the rules for the page contents that were mentioned before) and an XSLT file for translating the output XML into the final desired format. In this section we will look at the Page Template XML and its role in the publishing engine.


<HarmonisePage>
  <PageTitle>Test page</PageTitle>
  <Navigation name="mainnav">
    <Template id="10">
      <Section/>
    </Template>
  </Navigation>
</HarmonisePage>



This example of a Page Template XML file includes a static title for the page and a navigation group called "mainnav". Inside the navigation group a Section object will be published. Sections are the grouping objects for Documents and Assets; they normally form part of the administerable structure of a website built on Open-Harmonise.

The Section element, in this example, has no identifier (id attribute). It could have one, or it could instead have a Path element under it as an identifier. Without any identifier, the publishing framework will first of all check the State XML to see if there is a Section element in it which does have an identifier. If there is, this will be the Section that is published. If there is not, then a new Section will be created and published.

This last case may at first seem like a last ditch attempt to publish something instead of throwing an error, however that is not what it is for. It is the basis for publishing web forms. For example, if you wanted someone to be able to register themselves on your site, you would publish a new User object. When this is submitted to another page, encoded as XML, it will appear in the State XML for that page and can then be saved as a new User.

These are just a couple of examples of what you can do with the Harmonise Page Template XML, however the schema has many elements in it. Here is a list of some of the more useful ones.

* Search - publishing a search form
* List - publishing a list, which could be the results of a Search or a match using information from another object from the State or the currently logged in User
* include - XInclude for building pages from fragments

Mostly, elements in the Page Template XML map to Objects within an Open-Harmonise server, so you can easily add to the power of Open-Harmonise by developing your own Publishable Objects.

Object Template XML

In the Page Template XML example shown in the last section there was a Template element surrounding the Section element that we wanted to publish. This points to an Object Template XML file, which tells the Object which elements of itself to publish. While we could put these XML instructions directly into the Page Template XML, these tend to be very small reusable parts, which is why we split them out into Object Templates.


<Template>
  <Section>
    <Name/>
    <Summary/>
  </Section>
</Template>

The above example shows how we can tell the Section object to publish its Name and Summary information. These elements are generic to almost all Open-Harmonise objects, along with other elements such as Profile, which tells an object to publish its metadata information.

Objects within Open-Harmonise can also have elements specific to themselves, for example Document (which contains an XML document) has a Contents element.

Because Templates are so small and reusable you can quickly build up a library of often used ones. Doing this enables you to build sites more and more quickly.

Conclusion

In this article I have given you a basic introduction to the Open-Harmonise publishing engine. As you can see it is designed to be very simple to implement with. However using these simple techniques and XSLT/FO to translate the output XML you can create complex information solutions, as we have for http://www.nc.uk.net and http://www.designcouncil.org.uk.

Monday, October 18, 2004

RSS feed change and update on WS intermediary framework...

Some people may notice that I have changed the way in which this blog is syndicated through the RSS/Atom feed. I am only putting the opening paragraph out on the feed now; you will have to follow the link to the main site to see the rest. This is a bit of an experiment, trying to find the balance between getting the information out there and driving traffic to the site.

In other news, I did more work over the weekend on the Web Services Intermediary framework that I've been creating. I did a lot more testing of the Axis deployment integration by placing the dummy intermediary in front of as many of the Axis samples as I could. It worked fine with all of them, eventually.

I did finish the weekend banging my head against a problem, which I'm still to sort out. It appears that in some way the Axis SOAPElements, at the point in the chain I am operating, are immutable. However they are not throwing any exceptions when I call mutating methods on them. Going to have to look into that one.

Finally, expect a further update on the Open Harmonise initiative sometime tomorrow. I'll be posting some more details about the publishing engine.

Friday, October 15, 2004

More details on our open source initiative...

A few days ago I posted that the company I work for is going to open source all of its software, however I gave very few details. Partly that's because it was the end of the day so I didn't have much time, and also because a lot of the details were, and still are, to be decided. But I felt I should follow up with the information I have.

As I said we are open sourcing pretty much everything we have in our source tree, except old versions of things and some blind alley test/experimentation code. Almost all of the code we are releasing has in some way gone towards building our Harmonise(TM) platform. This is a quote from the link back there;

"Harmonise(TM), a suite of software tools, to support the specialist work of designing and implementing information standards and architectures."

The plan is to release the code and supporting material sometime around the week of the 22nd November under the name Open-Harmonise. Within the Open-Harmonise community there will be several independent projects and the Harmonise platform, which utilises the other projects.

Examples of sites running on Harmonise

National Curriculum Online: http://www.nc.uk.net/

Design Council: http://www.designcouncil.org.uk

National Theatre: Stagework: http://www.stagework.org.uk

Simulacra corporate website: http://www.simulacramedia.com

Open-Harmonise Key Features

- Complex metadata support. Any metadata defined in the system, with the appropriate Domain, can be attached to any object and then used for searching/matching.
- CMS for XML/link and binary resources.
- Simple Workflow creation and applications.
- Role based security.
- Full versioning of all resources.
- WebDAV interface.
- End-to-end XML publishing framework. XML templates in, XML out, XSLT/FO translation to output format.
- Java, therefore platform independent.
- Only requires a Servlet engine, no J2EE dependency.

Open-Harmonise Projects

Here is a quick run down of how we think we are going to split things out into different projects, please remember that these details are not confirmed and that the names of the projects have definitely not been confirmed.

Open-Harmonise - DataStoreInterface

This is a database abstraction layer based around SQL Object representations, e.g. Select object, Update object, Column Reference etc. As part of this project we have implementations for MS SQLServer, Oracle, MySQL and Firebird. It means you don't have to worry about which database you are talking to. This covers both DDL and DML statements.
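Purely to illustrate the idea, a hypothetical sketch of what building a statement against such a layer might look like; every class and method name here is my invention, not the real DataStoreInterface API:

// Hypothetical usage sketch - not the actual Open-Harmonise classes.
Select select = new Select();
select.addColumn(new ColumnRef("users", "surname"));
select.addWhereCondition(new ColumnRef("users", "id"), "=", "42");
// The layer renders and executes this in the dialect of whichever
// database (SQLServer, Oracle, MySQL or Firebird) is configured.
ResultSet results = dataStore.execute(select);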

Open-Harmonise XML-Commons

This is a collection of XML-related utility classes, for example an XML printer which includes pretty printing and namespace resolving. One of the other cool items in the collection is an XML fragment parser, which allows you to parse some XML that may begin with a Text Node and append it to an existing Node.
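
To show why that's handy: a plain DocumentBuilder will refuse a fragment that starts with a text node. One common trick - and this sketch is mine, not our class's actual code - is to wrap the fragment in a dummy root and import the children across:

import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class FragmentParser {

    // Parses a fragment such as "some text <b>bold</b> more"
    // and appends the resulting nodes to the given target node
    public static void appendFragment(Node target, String fragment) throws Exception {
        // Wrap in a dummy root so the parser accepts a leading text node
        String wrapped = "<fragment-root>" + fragment + "</fragment-root>";
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(wrapped.getBytes("UTF-8")));

        // Import each child of the dummy root into the target's document
        Document targetDoc = target.getOwnerDocument();
        NodeList children = doc.getDocumentElement().getChildNodes();
        for (int i = 0; i < children.getLength(); i++) {
            target.appendChild(targetDoc.importNode(children.item(i), true));
        }
    }
}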

Open-Harmonise WebDAV Interface

This is the WebDAV server interface to the CMS. This supports WebDAV, DeltaV (a variant on linear versioning with some workspace features), DASL (WebDAV searching), WebDAV Ordered Collections and a property definition standard that we have created.

Open-Harmonise Commons

Like XML-Commons, this is a collection of utility classes that we couldn't think of anywhere else to put, e.g. object caches, an abstract pool, a URI wrapper, e-mail (a JavaMail wrapper), MimeTypeMapping and an XML file logging implementation for JDK Logging.

Open-Harmonise Virtual File System

Pretty descriptive name, this is the API the administration application uses to talk to the server. Includes Virtual File System implementations for WebDAV (see server code description for supported specifications), FTP and local file systems.

Open-Harmonise Swing Commons

You can probably guess what this is all about. All the utility stuff we developed to support the administration application GUI.

Open-Harmonise Server Core

This is the meat of Open-Harmonise: the main server code that you actually run a website with. It includes the content objects (Document, Asset, Section, User, Property etc), their database handling code and the XML publishing framework, along with a lot of other stuff. This is dependent on a lot of the other projects listed here.

Open-Harmonise Information Manager

This is the administration application, from here you can manage almost all aspects of an Open-Harmonise server. The same interface is used for content creators/editors, administrators, information architects and website developers.

Simulacra Labs

Heavily Google-inspired naming, but basically this is an incubator for projects that are not fully ready for release. We are seeding this with the results from some of our developer research days. This may include:

- A telnet interface to the Open-Harmonise server.
- An FTP interface to the CMS.
- A collection of Web Service methods for the server.
- Lots of other little things that we have tried out over the years.


I am going to post some more details about the whole open source initiative and these specific projects as soon as I can, but that's it for now.

Thursday, October 14, 2004

XML unprefixed attributes and namespaces...

An issue came up today to do with XML attributes and their namespaces. In the application that I maintain I was reading some XML where the attributes did not have namespace prefixes but the element did, for example:


<ns1:element att1="val" att2="val">textNode</ns1:element>


This was because the person who had constructed it was working on the theory that the attributes would inherit the namespace of the element. The parser I was using, the JDK default which in this case was Crimson, did not agree.

Neither of us knew the correct answer, so we started rooting through the XML Schema specification...this did not provide a clear answer. So we started hunting around elsewhere, which is when we found this page:

Namespace Myths Exploded

Apparently, according to the spec, unprefixed attributes are not part of any namespace. In the end we added the required prefix and all was fine. I am pretty much against using the default namespace on any occasion; I prefer to make everything as explicit as possible where XML and namespaces are concerned, and this has solidified my position.
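
You can see this for yourself with any namespace-aware DOM parser. A quick test (the urn:example namespace is just made up for illustration):

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Attr;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class AttrNamespaceCheck {
    public static void main(String[] args) throws Exception {
        String xml = "<ns1:element xmlns:ns1='urn:example' att1='val'>text</ns1:element>";
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // essential, off by default
        Document doc = factory.newDocumentBuilder()
            .parse(new InputSource(new StringReader(xml)));
        Element element = doc.getDocumentElement();
        Attr att = element.getAttributeNode("att1");
        System.out.println(element.getNamespaceURI()); // urn:example
        System.out.println(att.getNamespaceURI());     // null - no namespace at all
    }
}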

Maybe this is a more widely known issue, but we'd not heard about it before so I thought I'd post it.

Wednesday, October 13, 2004

It's official, we're open sourcing everything...

Today we got the official nod that our company is open sourcing all of our software, over four years of development. We are going to be focusing on selling consulting and bespoke integration work, which is basically what we did before.

The launch to open source won't happen till the 22nd of November, but in the lead up to that I am going to post some introductions to the different projects that will be made available. For now you can check out our company site at http://www.simulacramedia.com.

Tuesday, October 12, 2004

Polls, PCs and stuff

Well, this week's poll does not seem to be attracting people. Perhaps no one is interested enough in the subject, which does surprise me. WS-Security is the next implementable WS-* specification, so I would have thought it would at least be on people's radar. I might leave this poll up a little longer than the last one.

In other news, I am off tonight to pick up a couple of second-hand machines from a friend. Still trying to plan out what setup I'm going to have, but I suspect that one of these will be my SCM/build server and the other a Tomcat/MySQL etc test server.

In doing all this, I'm definitely going to be moving over to Linux on all but one of my machines. The one that isn't changing will be for Windows testing and to run one bit of silly hardware that I can only run in Windows. It's looking likely that I'll go for SuSE 9.1 unless I can be persuaded in another direction.

Anyway, sorry for the lack of Java/Web Services-related postings over the last couple of days. Lots to do at work, and I'm still catching up from the codefest that was the weekend.

Monday, October 11, 2004

Web Service intermediaries in Axis

I've spent most of this weekend trying to work out how best to implement Web Service intermediaries in Apache Axis. Turns out this is quite hard.

While Axis is incredibly modularized, allowing you to plug in functionality at many points, none of these points is immediately useful for creating general intermediaries.

I did manage to find a way and got an ugly intermediary working for the GetQuote sample that comes with Axis. After that I began working on a more generalized framework for doing this. I now have a basic framework: you implement a single class, deploy part of the framework as the implementation of the service you want to intermediate, add a reference to your class in the WSDD entry for that service, and that's it.

Your class then gets access to the request SOAPHeader and SOAPBody, and the same again on the way back through for the response. You can alter these as they pass.
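
To give a feel for it, an implementation might look something like this. I must stress that the Intermediary interface and method names below are hypothetical stand-ins for this post, as the framework's real API is still settling down:

import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPHeader;

// Hypothetical contract exposed by the framework
interface Intermediary {
    void handleRequest(SOAPHeader header, SOAPBody body) throws SOAPException;
    void handleResponse(SOAPHeader header, SOAPBody body) throws SOAPException;
}

// An intermediary that simply logs traffic as it passes through
public class LoggingIntermediary implements Intermediary {

    // Called with the inbound message before it is forwarded to the real service
    public void handleRequest(SOAPHeader header, SOAPBody body) throws SOAPException {
        System.out.println("Request passing through: "
            + body.getElementName().getQualifiedName());
        // ...alter the header/body here as required...
    }

    // Called with the response on its way back to the original client
    public void handleResponse(SOAPHeader header, SOAPBody body) throws SOAPException {
        System.out.println("Response passing through");
        // ...e.g. strip internal-only headers before the client sees them...
    }
}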

The hardest part of this has been trying to come up with the requirements for the framework. No one appears to be doing any work with Web Services intermediaries, so I don't really know what people would want from the framework above and beyond this.

One annoyance I have is that there does not appear to be an easy way to bounce between Axis' SOAPElement objects and XML DOM objects and back again. I am wondering whether intermediary implementers would be okay with SOAPElements or would rather work with pure XML DOM. I suspect the latter, as there are far more tools to assist with DOM and it is the logical model of a SOAP service. However, I am concerned about the processing overhead.

I am a way off releasing this: I need to add some more functionality, some examples (probably based on the Axis samples) and a really good from-scratch example of a complex intermediary. I have a fairly good idea for this last part. Then more testing and some documentation.

If anyone has ideas for features they would like to see in this, let me know and I'll try to get them in, or at least architect around them as planned extensions.

Saturday, October 09, 2004

Poll #2: What are your WS-Security plans?

So the WS-Security specification has been published for some time now, and we also have the WS-I Security Profile to encourage interoperability. Toolsets are beginning to support the specification. All that's left is for people to actually begin using it.

This week's poll asks that very question. It would be fantastic if any of you who have implemented, or are implementing, WS-Security would share some of those experiences. Please post any comments you have on this subject; they are always welcome.

Friday, October 08, 2004

Developer Support #3: WS Spec Landscape

I was browsing the Apache Axis Wiki and came across this page which is a list of links to Web Service specifications.

Apache Wiki: WebServicesSpecifications

Very similar to the one I posted a couple of weeks ago, just a bit easier to read.

Also, there is a link at the bottom of that page to a WS Specifications timeline which is really quite nice.

Web Services Timeline

While browsing around I also came across this article, Constructing Reliable Web Services, on the IBM site. This is most interesting, as it talks about the WS-Reliability specification, which seems strange given that IBM back the rival WS-ReliableMessaging specification along with Microsoft.

Thursday, October 07, 2004

Amazon Web Services - Where might it head?

Amazon recently announced the update to their incredibly popular Web Services facilities.

"New Amazon Web Services Offerings Give Developers Unprecedented Access to Amazon Product Data and Technology, and First-Ever Access to Data Compiled by Alexa Internet"

This has got me thinking about what Amazon have done and where they may be heading with these. Now, I need to preface this by saying that it is purely speculation on my part; I have no inside knowledge. These ideas are just my thoughts on what I would do in their situation.

So far they have never charged for access to these services; however, the second part of this announcement, about opening up the Alexa Internet data, does say that it will be free during the beta phase only. This poses interesting questions about how they will charge for it and what services they will offer in return. So far Amazon have not offered anything in the way of service guarantees [Web services without warranties - LooselyCoupled.com].

What would you want to see from them if you had to pay to access the Web Services? For me it would be a Service Level Agreement for a start, with some indication of how they intend to support it. And if you read this site regularly you'll see the next part coming: I want Reliable Messaging of some sort. Not many people would be able to implement this yet, so you are talking about different endpoints with different facilities (one with RM, one without), with Amazon offering the endpoint without RM under a lower SLA.

Moving forward though, what else might happen at Amazon? Web Services technologies are maturing, and they should be beginning to think about what these changes will allow. In v3 they opened up the shopping cart through Web Services; really all that's left is the very end of the process, with user services and card payments.

The technical problems holding this back were a lack of reliability (solved by WS-RM or WS-Reliability) and of security (solved by WS-Security and the WS-I Security Profile). With these out of the way, they could move forward.

Obviously card payments cannot be opened up to everyone, so this suggests splitting Web Service users into Tier One and Tier Two. Tier Two would be the current users and systems: simple, open Web Services. Tier One users would be strategic partners who want to completely brand the entire shopping process. There would have to be serious business agreements in place, as well as SLAs. The experience Amazon will be gaining through the paid-for Alexa services will set them up for this.

This would be a major step forward for the widespread use of Web Services in mission-critical business infrastructures. This time next year, possibly?

Tuesday, October 05, 2004

Developer Support #2: XSL References Links

I do a lot of work building solutions around XSL and have a set of reference links that I live by, so I thought I would pass these along. These are the sites I always give to new developers or design partners starting out in XSL.

ZVON XSLT Reference

ZVON XSL:FO Reference

XSLT Questions and Answers

Standard XSLT extensions, useful until we all move to XSLT 2.0

Poll #1: Which IDE - results so far

I'm going to leave the poll up for the week, but the results so far look like this.

1) Eclipse/IntelliJ = 45%
2) JDeveloper = 4%
3) Netbeans/JBuilder/Websphere Studio = 2%

I'm glad Eclipse is right up there; it makes our decision to go for it seem okay. I am truly interested in checking out the latest version of JDeveloper. Might take a copy home and give it a go there.

Anyway, I will update you with the final results before I close out the poll at the end of the week.

For next week I'm thinking about something to do with Web Services adoption. Thanks to everyone who expressed an opinion, and I will set up some form of archive for these polls next week.

Monday, October 04, 2004

Apache proposal for WS-AtomicTransaction et al

There is a proposal over on the Apache Web Services WS-FX Wiki for an implementation of WS-Coordination, WS-AtomicTransaction and WS-BusinessActivity.

From what I can tell this is from the same people who started the Sandesha project, which is implementing WS-ReliableMessaging. Their proposal puts an emphasis on supporting interoperability. It would be good to see this get through, although I'm interested in reading the Architecture guide and the User guide more fully to see how easy it will be to use.

I guess the WS-* stack is moving forward whether you agree with it or not.

Which IDE do you use?

I was reading a review of Oracle JDeveloper 10g today, and I was shocked that I didn't know it was being released for free. It would appear that I am that out of touch with such things. At work I am very happy with Eclipse, more so after the move to v3, but should I be?

I've heard great things about IntelliJ but never tried it, so I would like to know what you think. I've added the poll you can see on the left; if you are going to select "Other", please leave a comment saying which "Other" that is, or indeed a comment about any of the IDEs.

Thursday, September 30, 2004

Developer Support #1: IT Conversations...

My God, how can I not have heard about this site before? IT Conversations houses a vast archive of quality interviews with IT industry professionals and audio recordings of major events, such as the Open Source Convention.

In the last few days, while trudging through some really boring work, I've listened to 8 of these interviews, and while it's true that some are more interesting than others, I've found something worthwhile in each one.

I highly recommend the site. I will be making sure that all our developers know about it.

IT Conversations

Wednesday, September 29, 2004

Warning about Axis and Saxon before v7.2

This was a weird one that caught us earlier today. One of our developers had implemented a local preview of pages in our client application. In the end this ran an XSLT over some content XML on the local machine, spat the resulting HTML out to a file and opened it in the registered browser.

He was getting different results from when the processing happened on the server side, and the results also differed between running it in Eclipse and via Java Web Start. While the initial reaction is to jump to some sinister conclusion, it turned out that on the server, and in his Eclipse project, he was using Saxon (v7.0, which will become important, as you probably guessed from the title!), yet the JNLP file for the Java Web Start version had no Saxon, so it was using the JRE default of Xerces.

Now, Xerces and Saxon have been known to behave a little differently when it comes to XPath shorthand (e.g. match="/Bob/Sarah" as opposed to match="/child::Bob/child::Sarah"), so this wasn't unexpected. So he put Saxon into the JNLP file and uploaded the appropriate jar.

Mere moments later cries of anguish went up from another developer who had loaded this version of the client through JWS and something important hadn't worked.

As mentioned in previous posts, the main protocol between our client and server is WebDAV; however, for things that are in no way covered by that specification we implement Web Services. There is one which checks some user details, and this was failing, at least as far as the application was concerned.

When doing development work we always run the connection through the Apache Axis project's TCPMonitor program. So any time there is a problem like this we can look at the HTTP messages flying back and forth and see what's going on....or rather, so we know who's to blame: me for the client side or another guy for the server side.

The TCPMonitor output showed that the SOAP request was fine, the server had processed it fine, and it had returned a correct SOAP response.

Turns out that Saxon, before v7.2, was packaged with the Ælfred XML parser, which Axis, so it seems, cannot use correctly to parse incoming responses. A quick upgrade to Saxon v7.2 and no more problems.

Lessons Learned #1: JComboBox is actually quite complex...

I talked a little while ago about taking an opportunity to rebuild our ResourceTree component, which is a reusable tree view over resources on our server. I talked about extending this into a FormResourceTree, which added checkboxes or radio buttons to each node. One place the plain ResourceTree is used is in a JComboTree component. This is a JComboBox-like component that contains a ResourceTree in the drop-down instead of a list of options.

I first built this component months ago. We had a requirement to be able to select a single resource in forms, but we had limited space to lay out in. A temporarily displayed ResourceTree seemed to be the way to go, and my project manager, being the guy he is(*), "suggested" it should work like the JComboBox.

It took me almost a week to get this working. I had thought it would be quite quick. Turns out JComboBox does some interesting things to be able to display the drop down over all the other components, even over the edge of the main window.

What I produced in the end was a JPanel containing a JLabel and a JButton. The JLabel holds the selected value and the JButton opens the drop-down with the ResourceTree in it. The drop-down, of course, is no such thing; it's a floating borderless JWindow, aligned with the bottom of the JLabel and JButton to look like a drop-down.
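
The alignment itself is the easy part, once you realise the window has to be placed in screen co-ordinates. Simplified from the real component, the show logic is roughly:

import java.awt.Point;
import javax.swing.JComponent;
import javax.swing.JWindow;

public class DropDownHelper {

    // Shows the borderless window aligned under the anchor component
    public static void showBelow(JComponent anchor, JWindow dropDown) {
        // getLocationOnScreen gives us screen co-ordinates, which is what
        // a top-level window like JWindow positions itself in
        Point p = anchor.getLocationOnScreen();
        dropDown.pack();
        dropDown.setLocation(p.x, p.y + anchor.getHeight());
        dropDown.setVisible(true);
    }
}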

The JWindow is hidden when a value is selected from the tree, or if the JButton is pressed again. The problem has been that if you don't close the JWindow, by selecting a value or pressing the button, then it will remain floating in space.

Obviously this has been reported as a bug...many times.

After creating the FormResourceTree that I talked about before, my project manager suggested that I use this in the JComboTree, especially where we allow multiple selections within it. This didn't take too long to do, and removed another of the many JTree implementations used throughout the code. He also suggested that I fix that "annoying" floating window problem, so that it disappears when you click away from it, just like a proper JComboBox.

So how can you do this with any certainty? If you are anything like me, the first two things you will think of are FocusListeners and MouseListeners.

Well, I'll tell you now that it's nothing to do with MouseListeners. What would you put them on? Everything else in the program? No, nothing to do with mouse listeners.

The FocusListener is a good start. Listen for focus-lost events on your JWindow. This works to a degree, as focus in Swing follows keyboard entry focus, so it covers the cases where the user moves onto another component that accepts keyboard input. However, if your JComboTree is in a tabbed pane and the user switches to another tab, this apparently doesn't count.

So where do you look for a solution? Well, my next step was to realise that JComboBox works, and therefore I should check out how it does it. A quick look in the Java SDK source later and I had the solution: AncestorListener. I'll admit I'd never heard of this before. According to the Java SDK JavaDocs it is this:

"AncestorListener Interface to support notification when changes occur to a JComponent or one of its ancestors. These include movement and when the component becomes visible or invisible, either by the setVisible() method or by being added or removed from the component hierarchy."

This is nearly perfect. After implementing it, our pesky JWindow disappears in almost all cases; at the least I can be sure that it will go eventually. People seem a lot happier now in any case, and that will do me.
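
In case it saves someone else a hunt through the source, the fix boils down to something like this (simplified from the real component):

import javax.swing.JWindow;
import javax.swing.event.AncestorEvent;
import javax.swing.event.AncestorListener;

// Hides the drop-down whenever the combo component moves, is hidden,
// or is removed from the component hierarchy (e.g. a tab switch)
public class HideDropDownListener implements AncestorListener {

    private final JWindow m_dropDown;

    public HideDropDownListener(JWindow dropDown) {
        m_dropDown = dropDown;
    }

    public void ancestorMoved(AncestorEvent event)   { m_dropDown.setVisible(false); }
    public void ancestorRemoved(AncestorEvent event) { m_dropDown.setVisible(false); }
    public void ancestorAdded(AncestorEvent event)   { }
}

You then register it on the combo component itself, e.g. comboTree.addAncestorListener(new HideDropDownListener(dropDownWindow));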

So what are the "Man-at-Arms" style morals from this tale? Firstly, Swing components can often be far more complicated than you realise until you try to replicate their functionality. Secondly, keep the Java SDK source close to hand; it can often provide inspiration, or at least explain what's going on under the hood.

* Just to explain: my project manager on this project has very strong views on how interfaces should work. Often these are far from the accepted norm, and therefore from what is directly supported in frameworks like Swing, but almost always the right thing to do. Hence I've spent most of this project extending or replacing Swing components.

Friday, September 24, 2004

Checkboxes in a JTree.

I've been having fun over the last couple of weeks getting to tidy up some of the code that I created a long time ago. For the administration client application I had gone through several designs for a standard tree view of resources, based on the Java JTree class.

By the end of the project I had about five different versions, all sharing different bits of code; for example, some had their own specific TreeNode implementations, while most shared a TreeCellRenderer, etc.

I had to go in and fix a bug in these trees and thought I would take the opportunity to rationalise all this work into one standard tree component and an extension of that which turned the tree into a form, with checkboxes or radio buttons either at the leaf nodes or on all the group nodes.

So I created a new TreeCellRenderer which produced a TreeCell (one of my classes that extends JPanel) for each cell. In the TreeCell were the usual things, but also either a checkbox or a radio button. This all worked fine in a tree where you can only select the leaf nodes; however, in a tree where the leaf nodes are not displayed and group nodes can be selected, I ran into trouble.

You see, I could not use a mouse listener on the checkbox, or in fact listen for events from the checkbox at all. I am guessing that the JTree has some glass pane over it, preventing this from working. So I extended MouseAdapter, put that on the tree and got the tree row from the mouse click co-ordinates, giving me the TreeNode, which I could then tell to select or deselect its checkbox.
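
That part is simple enough; JTree's getPathForLocation is the real API here, while the toggle call stands in for my own node code:

import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JTree;
import javax.swing.tree.TreePath;

public class CheckboxMouseAdapter extends MouseAdapter {

    private final JTree m_tree;

    public CheckboxMouseAdapter(JTree tree) {
        m_tree = tree;
    }

    public void mouseClicked(MouseEvent e) {
        // Translate the click co-ordinates back into a tree path
        TreePath path = m_tree.getPathForLocation(e.getX(), e.getY());
        if (path != null) {
            Object node = path.getLastPathComponent();
            // ...tell the node to toggle its checkbox/radio button here...
        }
    }
}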

All seemed well, except in some strange cases where clicking on the plus symbol to open a part of the tree would pass the mouse event down to the TreeNode. This would only happen when the group node in question didn't actually have any sub-groups underneath it, so the plus symbol disappeared.

It took me a long time to find the solution to this. I tried screen co-ordinate translation (note: don't ever try to get this working with tree cells!). I added a tree selection listener, but of course the selection that it reports has nothing to do with my checkboxes: I could select a node, but then not unselect it, because the underlying node was still seen as selected and so the event would not get passed to the listener.

In the end I had to create a class which implemented TreeSelectionListener and extended DefaultTreeSelectionModel. This picks up the tree selection ("valueChanged") event, finds the node from the selected TreePath and tells it to toggle its checkbox/radio button. Then I call this.clearSelection() on the superclass TreeSelectionModel. Unfortunately this triggers another "valueChanged" event, so I had to add a re-entry boolean to stop that from happening.


import javax.swing.event.TreeSelectionEvent;
import javax.swing.event.TreeSelectionListener;
import javax.swing.tree.DefaultTreeSelectionModel;
import javax.swing.tree.TreeNode;

public class TreeMouseListener
        extends DefaultTreeSelectionModel   // already a TreeSelectionModel
        implements TreeSelectionListener {

    // Guards against the extra "valueChanged" fired by clearSelection()
    private boolean m_bReEntryStop = false;

    // The tree this model is installed on (gives us the node -> cell lookup)
    private final ResourceTree m_tree;

    public TreeMouseListener(ResourceTree tree) {
        m_tree = tree;
    }

    public void valueChanged(TreeSelectionEvent tse) {
        if (!m_bReEntryStop) {
            TreeNode node =
                (TreeNode) tse.getPath().getLastPathComponent();
            TreeCell cell = m_tree.getCellForNode(node);
            if (cell.isEnabled()) {
                // Toggle the node's checkbox/radio button
                cell.mouseClicked(null);
                // Clear the underlying selection so the next click on the
                // same node generates a fresh selection event
                m_bReEntryStop = true;
                this.clearSelection();
                m_bReEntryStop = false;
            }
        }
    }
}

Thursday, September 23, 2004

Gartner update on major Web-Services-Enabled software players

Spotted this analysis of the market. It has some nice thoughts on the different approaches to WS that these players are taking and their involvement in standards work. It's a bit of a shame that the entry point is capped at >$500 million in revenue, but there is an honourable mention for one company that didn't make the cut. Worth a read.

Magic Quadrant for Web-Services-Enabled Software, 3Q04

Tuesday, September 21, 2004

Load testing surprises...

We've been doing a considerable amount of load testing on the new server code recently and have had some shocking results. The main one was a concurrency problem with our caching classes. We had implemented a removal queue in which objects would rise and fall based on usage; it turned out that there were some serious locking problems going on in there. One of our developers reimplemented it using Doug Lea's util.concurrent package (Release 1.3.4) and an LRUMap, and we've gained a considerable amount of performance.
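
For anyone who wants the flavour of it without pulling in the library, the LRU part can be sketched with the JDK's own access-ordered LinkedHashMap (available since 1.4); this is purely an illustration, not the code we actually shipped:

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache extends LinkedHashMap {

    private final int m_maxEntries;

    public LruCache(int maxEntries) {
        // true = iterate in access order, so the eldest entry
        // is the least recently used one
        super(16, 0.75f, true);
        m_maxEntries = maxEntries;
    }

    // Called by LinkedHashMap on every put; returning true evicts the LRU entry
    protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > m_maxEntries;
    }

    public static void main(String[] args) {
        // Still needs external locking under concurrent load - exactly the
        // sort of thing util.concurrent handles properly
        Map cache = Collections.synchronizedMap(new LruCache(1000));
        cache.put("key", "value");
    }
}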

Moral of the story, check this stuff earlier.

Monday, September 20, 2004

More debate about the WS-* stack.

The recent flood of blog postings about the viability of the WS-* stack specifications has mostly stemmed from Microsoft's recently published whitepaper An Introduction to the Web Services Architecture and Its Specifications. I've read the paper now, and generally think it's a very good overview of the MS/IBM Web Services Architecture; however, there are a few points I feel I need to raise.

1) This is the MS/IBM camp's architecture, and therefore this whitepaper must be read within that context. There are open standards from the W3C, OASIS, IETF etc. talked about in there, but there are also a lot of specifications that have not been, and are nowhere near being, submitted to a standards body.

I think it is a missed opportunity for Microsoft not to have made this clearer. If they had, they might have managed to steer the resulting debate towards why these specifications are important and how best we, as a community of developers, can bring them about.

Worse yet, they present specifications here that have direct competition from open standards, for example the MS/IBM WS-ReliableMessaging, which is basically the same as the OASIS WS-Reliability specification.

That said, read in that context, this whitepaper is a great introductory article to the state of one Web Services Architecture, and it will set you thinking about a more general one.

2) After reading some of the follow-on posts, for example "More WS-* specs, more questions about architectural viability", I can understand some of the problems people are having with this. Of course people are not going to see the reason for all these specifications if they are not now, and probably never will be, interested in using them. However, I don't agree with the following.

"Tim Bray's point about Amazon, eBay, etc. not needing the WS-* stuff to get their job done is well taken, but it's also quite clear that these were built from the ground up to work with the Web, whereas the fertile ground for WS-* are the enterprise systems that were not designed with the web in mind."

Amazon, eBay and the rest are at the vanguard of such large Web Services implementations. I am sure they have long-term strategies which they are not sharing, but the services they have exposed so far are not critical services. Amazon does not come off too badly if an associate's site fails to load a page, and they are not yet allowing seamless purchasing from associates. For this to happen, specifications such as WS-Security are essential. One of the Reliability specifications/standards will also be required if they are to push the purchasing services this far out.

This is only the start; large corporate associates would probably wish to implement something like WS-SecureConversation to avoid the repeated single-shot security negotiation you get with just WS-Security.

Of course Amazon, eBay and the like have been happy with only SOAP and WSDL, because that's all they had, but I wouldn't be surprised to see them picking up on other standards and doing some interesting things with them. To address the second point in the quote: yes, they were designed for the web, and yes, enterprise systems will need some of these specifications; however, I would be willing to bet that Amazon see their systems as "enterprise".

Also, are we going to throw out all the ideas and experience from these "enterprise" systems? Currently most of the Web Service implementations out there are simple point-to-point systems; there are few, that I know of, complex networks of services with many intermediaries and compound services which go on to trigger many more. These types of system will require the more advanced WS-* stack specifications. So let's start getting them ready now.

Friday, September 17, 2004

The state of the WS-* stack

If you've been reading my updates you will know that I've been very concerned about the status of the WS-* stack (the set of standards that can be implemented on top of the basic Web Service standards of SOAP, WSDL and UDDI).

My major concern has been that you cannot currently describe a service that utilises any more than the basic standards; my main case in point was that there is no standard way in WSDL to describe the WS-Security requirements of a service.

Since those recent postings, and the release of the WS-Reliability first draft, I've been catching up on the state of the various WS-* stack standards. Here is what I've found, some of which is definitely not good news.

Firstly, here is a pretty good overview of the various standards:

The Web Services Protocol Stack

Next, going back to the WS-Security example that I have talked about before. My problem with this was that there appeared to be no way to describe the WS-Security requirements of a service in the WSDL file, and therefore no way to programmatically create clients for that service without some form of human intervention.

After doing some research, which I should have done before, I now understand what is going on. I think my confusion stems from the fact that WS-Security is now an OASIS standard but originally came from the Microsoft/IBM camp. In the MS/IBM camp, the preferred way of dealing with programmatic descriptions of WS-* stack requirements is to place the information into WS-Policy XML.

WS-Policy is simply a framework to contain policy assertions; these are standard-specific and are described in other documents (for example the WS-Security Policy standard).

WS-Policy in itself does not describe a way of attaching the policies to service descriptions; this is dealt with by the WS-PolicyAttachment specification, which describes how to attach WS-Policies to both WSDL and UDDI.
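
From memory of the drafts, the shape of it is roughly as below; treat this as a sketch only, with the namespace declarations elided and the security assertion just an example:

<wsp:Policy wsu:Id="SecurityPolicy">
  <wsp:ExactlyOne>
    <!-- a standard-specific assertion, e.g. from WS-Security Policy -->
    <wsse:SecurityToken>
      <wsse:TokenType>wsse:UsernameToken</wsse:TokenType>
    </wsse:SecurityToken>
  </wsp:ExactlyOne>
</wsp:Policy>

<wsdl:binding name="MyServiceBinding" type="tns:MyServicePortType">
  <!-- WS-PolicyAttachment: point the binding at the policy by reference -->
  <wsp:PolicyReference URI="#SecurityPolicy"/>
  <!-- ...the usual SOAP binding details... -->
</wsdl:binding>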

Now, I like the WS-Policy framework: it is nicely lightweight in its core specification, neatly extended by the standard-specific assertions, and I also like the separation of the attachment details. However, this does not solve the WS-Security issue.

You see, WS-Security is now an OASIS standard, but WS-Policy has not yet been submitted. So while we are all agreed on how to implement Web Services security, we have no (independently standardised) way to attach these requirements to our service descriptions.

I need to do more thinking about this, but I am wondering where we go from here. In my mind I am formulating what I see as the WS-* stack Base Implementation: the minimum set of standards that need to be implemented before Web Services become truly usable for industry. The more I look into it, the further away this goal seems.

Anyway, time for Friday drinks in the pub. I will post more on this later, including any findings on the MS/IBM camp's WS-ReliableMessaging specification versus the OASIS WS-Reliability standard.....surely things can only get better!