Saturday, December 30, 2006

Requirements elicitation

The ways and means of software design, build and test have progressed in leaps and bounds in recent years. But when it comes to requirements management, the IT community is still stuck in the stone age. Formal approaches have been tried and have not worked tremendously well in business IT scenarios. In one of my earlier posts (Requirements management is crucial) I tried to analyze why a formal approach may not work well with business IT systems.

That being said, even with the semi-formal ways and means we use for requirements elicitation and management, we can do much better. For example, on my current project one of my business analysts (BAs) was complaining about how inept users are at giving requirements. I have heard that umpteen times by now. What we in IT fail to understand is that business users and IT are two different cultures trying to speak with each other. Not only are the cultures different, even the languages are different.

My BA holds a workshop to elicit requirements, and use cases are his chosen means. But the business users concerned have never heard of use case terminology. They cannot specify their requirements in the structured way a use case expects. It would help them tremendously if we told them beforehand what we expect from them, and in what format. Maybe a mock exercise or a working example can do the trick.

Another thing we fail to understand is that there are multiple types of requirements, and a use-case-focused approach tends to lump them together. There are strategic and policy-level requirements, which are common across multiple use cases, whereas operational requirements are specific to individual use cases. When we mix all of these together in a use case, we are more often than not going to get requirement changes, because the users empowered to specify these different requirements are different people. When an operational user speaks on behalf of policy makers, he is making a mistake. And we in IT encourage him, by pressuring him to specify his requirements faster!

We need better mechanisms to handle this than plain use cases. I sincerely wish there were more tools in this space to help my BA and users alike. Till then, I have to devise manual means to identify operational requirements and strategic requirements, and manage their life-cycles separately. It will then be clearer to my BA why he is getting so many requirement changes.
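For want of a tool, even a spreadsheet-grade registry helps. Here is a minimal sketch in Java (all names hypothetical, not any real tool) of the separation I have in mind: each requirement carries its level and its empowered owner, so the strategic and operational life-cycles can be tracked, and challenged, separately.

    import java.util.ArrayList;
    import java.util.List;

    // A minimal sketch of a requirements registry; all names are illustrative.
    public class RequirementsRegistry {

        enum Level { STRATEGIC, POLICY, OPERATIONAL }

        record Requirement(String id, Level level, String empoweredOwner, String text) { }

        private final List<Requirement> requirements = new ArrayList<>();

        void register(Requirement r) {
            requirements.add(r);
        }

        // Each level has its own life-cycle, so changes are reviewed per level,
        // with the empowered owner for that level.
        List<Requirement> byLevel(Level level) {
            return requirements.stream().filter(r -> r.level() == level).toList();
        }

        public static void main(String[] args) {
            RequirementsRegistry registry = new RequirementsRegistry();
            // A policy-level requirement owned by a policy maker...
            registry.register(new Requirement("R1", Level.POLICY, "Head of Compliance",
                    "All customer instructions must be auditable for 7 years"));
            // ...and an operational one owned by the end user; mixing the two
            // inside one use case is what invites churn.
            registry.register(new Requirement("R2", Level.OPERATIONAL, "Back-office clerk",
                    "Search instructions by customer id and date range"));
            System.out.println("Operational: " + registry.byLevel(Level.OPERATIONAL));
        }
    }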

Wednesday, December 27, 2006

Enterprise architecture in MDA terms

Recently one of my colleagues quipped about UML being nothing but pretty pictures. But at the same time he wanted to use MDA for EA. He pointed to this document as a good start.

I feel he was wrong about UML being nothing more than pretty pictures. It has a meta-model behind it, which can be extended and used in ways you want. I have myself extended UML to capture object-relational mappings and generated a lot of code from it. Given such misconceptions about UML, no wonder there is big resistance to using MDA as a means in enterprise IT. But things are changing. There are now attempts to make MDA the means for enterprise architecture definition and deployment. There are a few challenges in achieving this, though.
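To give a flavour of what extending the meta-model buys you, here is a toy sketch in Java (hypothetical names, not our actual toolset): a class carrying an O-R mapping stereotype as tagged values, and a trivial generator that emits DDL from it. A real MDA tool walks the UML meta-model; this only shows the shape of the idea.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy model element carrying an <<orMapped>> stereotype as tagged values.
    public class OrMappingGenerator {

        record ModelClass(String name, String tableTag, Map<String, String> attrToColumn) { }

        static String generateDdl(ModelClass mc) {
            StringBuilder ddl = new StringBuilder("CREATE TABLE " + mc.tableTag() + " (\n");
            mc.attrToColumn().forEach((attr, col) ->
                    ddl.append("  ").append(col).append(" VARCHAR(255), -- from attribute ")
                       .append(attr).append("\n"));
            ddl.append("  PRIMARY KEY (").append(mc.attrToColumn().values().iterator().next())
               .append(")\n);");
            return ddl.toString();
        }

        public static void main(String[] args) {
            Map<String, String> mapping = new LinkedHashMap<>();
            mapping.put("customerId", "CUST_ID");
            mapping.put("name", "CUST_NAME");
            ModelClass customer = new ModelClass("Customer", "T_CUSTOMER", mapping);
            System.out.println(generateDdl(customer));
        }
    }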

I have often wanted models for every cell of the Zachman framework. For me, the downward journey within a column of the Zachman f/w is one of model refinement, and the horizontal journey is one of model traceability. However, the Zachman f/w is just a framework. To be useful, it needs to be fleshed out. The canvas is very big. So those EAs within an enterprise who believe in MDA should take a couple of responsibilities upon themselves: a) create models for every cell of the Zachman f/w and prove that model transformations do work, by refining models down the cells; and b) create a body of knowledge on deploying and governing these models across the f/w. How to fit this into the normal IT governance f/w and secure funding is a big challenge. For that, I propose we first use 'model driven development' (MDD for short) to prove the value of an MDA-like approach.

MDA is a big culture shock for most IT organisations, precisely because everyone out there thinks UML is nothing but pretty pictures. Those who believe in MDA need to start small and prove the value of the MDA approach; only then can we go to the next level, that is, making EA congruent with MDA. Using MDD is a very good way to begin proving the value of MDA, unless you find organisations which are sold on MDA to begin with. In short, this is a very tough ask, and a lot of farming/hunting is required to nurture the approach. Being a practitioner, I would suggest trying this approach out on a small scale, in non mission-critical projects.

Another problem we might face is that models are a rigid way of capturing knowledge. They are suitable for the more defined aspects of enterprise architecture (i.e. all circles of the TOGAF framework) but not for the more fluid aspects (like the business and IT strategies required in a few cells of the Zachman f/w). So from a TOGAF f/w perspective they are OK, but not from a Zachman f/w perspective. To be used with the Zachman f/w, we may have to augment MDA with something more, some sort of unstructured knowledge repository. But this is a long way off and can be ignored for the time being.

It is good to see interest in MDA being reinvigorated.

Thursday, December 14, 2006

Services are not ACID

ACID stands for atomicity, consistency, isolation and durability. Any transactional system needs to provide these properties. In file-based systems, arranging for these properties was the problem of the designer and the implementor (coder). Then, with the advent of the RDBMS, these responsibilities were taken up by the RDBMS. Again, in the early days of distributed computing, the responsibility (for two-phase commit) fell on designers/coders. Then came the X/Open XA architecture, bringing transaction co-ordination support to every RDBMS. Life became easier after that (from an ACID perspective).

ACID is critical for transactional systems. Services in an SOA do not guarantee ACID properties. They require compensatory services to achieve an ACID-like effect. Those of you who have worked with file-based information systems and then moved on to RDBMSs will understand my concern about services not being ACID all the more. In an OLTP world, a transaction followed, after some delay, by a compensatory transaction is not exactly equivalent to an ACID transaction. In most cases this may not matter, but when it does, the effect can be pretty nasty. Especially when a service executes an automated protocol in one service call and then tries to undo its effects using a compensatory service. This kind of thing happens in straight-through processing scenarios. It remains one of my major concerns.
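A minimal sketch of the concern, with illustrative service names: between a service call and its compensator there is a window in which the intermediate state is visible and actionable, which a true ACID transaction would never expose.

    // Sketch of a compensation pair; all names are illustrative.
    public class CompensationDemo {

        static int balance = 100;

        // Service call: debits immediately, visible to everyone at once.
        static void debitService(int amount) {
            balance -= amount;
            System.out.println("Debited " + amount + ", balance now " + balance);
        }

        // Compensator: undoes the debit some time later.
        static void compensateDebit(int amount) {
            balance += amount;
            System.out.println("Compensated " + amount + ", balance now " + balance);
        }

        public static void main(String[] args) {
            debitService(40);
            // In the window before compensation runs, an automated protocol
            // (e.g. straight-through processing) may act on balance == 60.
            // An ACID transaction would have rolled back atomically; a
            // compensator cannot recall decisions already made on the
            // intermediate state. Isolation is the property we lost.
            compensateDebit(40);
        }
    }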

SOA, funding and hidden gems

There are some good posts on services and their characteristics. The main thrust of these posts is to bring out the characteristics of an enterprise service. There is a comparison between city planning and SOA. One thought encourages services of different granularities to co-exist, with an analogy of niche retailers v/s Walmart.

However, the way projects are funded in enterprise IT will hinder this approach. Enterprise IT, as far as funding goes, is governed more like a communist politburo than a capitalist entrepreneurial system. There are no venture capitalists ready to fund (seemingly) wild and wacky ideas. Funders would rather go with proven ways of doing things. So the idea of niche services co-existing with run-of-the-mill enterprise services is a non-starter. That does not mean it will not happen.

Traditionally, departments moonlight on their budgets and create funding for these niche capabilities (albeit not in service form), but those capabilities then remain outside the purview of enterprise architecture, in the back alleys, hidden in enterprise hiddenware. This also prevents proper utilisation of these capabilities. The same thing can happen in an SOA. Departments will create niche services and somehow fund them. But these services will stay below the radar for the rest of the enterprise.

An enterprise architect has to live with this fact of life and provide means to unearth such hidden gems and bring them back into the EA fold for governance. As mentioned in the posts above, a collection of such niche services may be a viable alternative to a coarse-grained enterprise service, but only if we know those niche services exist.

Solving the funding issue, to borrow a phrase from Ms. Madeleine Albright, is above the pay grade of most enterprise architects and best left to business leaders to figure out!

Monday, December 04, 2006

SOA and demonstrating ROI

Typically, large enterprises in the BFSI space have conservative cost-benefit accounting practices, focused on accountability. Within these enterprises, IT departments are often flogged for not achieving ROI. This also makes ROI a major area of concern for an enterprise architect.

In the 2007 trend analyses by IT experts, one of the major trends mentioned is the difficulty of demonstrating ROI. Demonstrating ROI has traditionally been a problem in IT shops. With stovepipe architectures, at least you could attach your costs to the relevant information systems and figure out the benefits accrued because of each system. Whether it made sense to have a particular information system was obvious. Of course, the problem of accounting for intangible benefits was always there.

The situation became more difficult with the sharing of infrastructure, be it client-server systems or, lately, EAI. With SOA gaining traction, the issue of demonstrating ROI is becoming even harder to handle. There are several issues, ranging from when a project is considered delivered, to how costs are measured, apportioned and paid for, to what constitutes a benefit and how to measure it. Most of the time, IT folks are on the defensive in these debates.

There are problems with measuring both the fixed costs incurred while building these systems and the running costs incurred while operating them. On the fixed-cost front, the problem arises mainly because of the seamless reuse of infrastructure and non-functional services. Please note this is different from any sharing of infrastructure you may have seen earlier. Earlier, even with sharing, there were clear-cut boundaries between users. Now, with SOA, the boundaries are blurred as far as infrastructure and non-functional services go. One does not know how many future projects are going to use these services (with or without refactoring), hence there is no clue how to apportion the fixed costs to users. This sometimes turns projects away from using the common infrastructure, as the up-front costs are deemed too high. And if yours is the first project, you rue the fact that you have to build the entire infrastructure and own up all the cost. The enterprise needs a clear policy for these scenarios, so that projects know how cost is going to be charged.

On the running-costs front, the various cost elements (e.g. the ESB, network connections, underlying applications) need to be metered at a granularity which makes sense for accounting purposes and which helps tag the metered data to a user. Here 'user' is meant from a cost-benefit accounting perspective, not an actual business user. This metering is not easy. Most of the time, the underlying IT elements are not amenable to being metered at a level of granularity that lets you connect the metered data with users. Sometimes metering at such granularity adds an unacceptable overhead, to the point of breaching non-functional requirements. I have not seen any satisfactory solution to this problem. People have used various sampling and data-projection techniques. These are unfair in some scenarios, and costs get skewed in favour of or against some information systems. The applications part of this run-time metering is relatively easy, but it still has the problem of adding overheads and breaching non-functional requirements, so people use sampling and projection techniques here too. Luckily, there is not much seamless reuse of application services, so this sampling does not skew costs drastically.
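To illustrate why sampling creeps in, here is a minimal sketch (hypothetical names and rates): metering every call is accurate but burdens the hot path, so one meters every Nth call and projects the total, accepting exactly the skew described above.

    import java.util.concurrent.atomic.AtomicLong;

    // Sketch of sampled metering; names and the 1-in-10 rate are illustrative.
    public class SampledMeter {

        private final int sampleEvery;
        private final AtomicLong calls = new AtomicLong();
        private final AtomicLong sampledMicros = new AtomicLong();

        SampledMeter(int sampleEvery) { this.sampleEvery = sampleEvery; }

        // costCentre is where metered data would get tagged to a 'user'
        // in the cost-benefit accounting sense.
        <T> T invoke(String costCentre, java.util.function.Supplier<T> service) {
            long n = calls.incrementAndGet();
            if (n % sampleEvery != 0) {
                return service.get();          // fast path: no metering overhead
            }
            long start = System.nanoTime();
            T result = service.get();
            sampledMicros.addAndGet((System.nanoTime() - start) / 1_000);
            return result;
        }

        // Projection: scale the sampled cost back up. This is where costs
        // can get skewed for bursty or atypical callers.
        long projectedTotalMicros() {
            return sampledMicros.get() * sampleEvery;
        }

        public static void main(String[] args) {
            SampledMeter meter = new SampledMeter(10);
            for (int i = 0; i < 100; i++) {
                meter.invoke("claims-dept", () -> "customer details");
            }
            System.out.println("Projected cost (us): " + meter.projectedTotalMicros());
        }
    }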

As for benefits, the debate is even more intense. It begins with the definition of when the project is deemed complete and starts accruing benefits. E.g. in a business process automation project using SOA, with iterative delivery, one may automate some part of a business process, which results in some benefit, but the end-to-end business process may actually suffer during the transition, because it may have to carry on with manual work-arounds. So how does one measure benefits in this case? With intangibles, there is the perennial problem of how one measures intangible benefits at all. Sometimes even measuring tangible benefits is difficult, because the data for comparison is unavailable. As with costs, measuring benefits needs metering at various levels for the elements of benefit (e.g. time, resource usage etc.). All the metering issues faced by cost measurement are faced by benefit measurement as well.

So the key problem is working out the semantics of costs and benefits with the business folks, and putting a programme in place for their measurement in conjunction with SOA. If SOA is combined with such a measurement programme, then it may be possible to demonstrate ROI against these agreed definitions. This measurement programme is peculiar in the sense that its deployment can be clubbed with SOA, but it has its own governance needs, separate from SOA's. So it needs to be handled appropriately. It is more than BAM and covers the IT aspects too. Maybe we need a new acronym; how does BITAM (Business and IT Activity Monitoring) sound?

Saturday, December 02, 2006

Agile methods to kill outsourcing? I don't think so.

I hear an industry thought leader vigorously promoting agile methods. Nothing wrong with that. The only problem is an underlying thought I don't agree with: that agile methods can kill outsourcing. I would like to provide a logical argument against it.

For this, we need to understand the philosophies behind getting a job done. The two prominent schools of thought are Adam Smith's and Hammer and Champy's. Adam Smith proposed a division-of-labour centric approach. To get any job done, the roles were clearly segregated. Each role needed specific skills. The inputs beyond the skill a particular role supplied were provided by the roles above it. The system was made up of a lot of specialists. Each specialist made only those decisions confined to his role and sought decisions beyond it from outside, irrespective of the effect on the job at hand. (Any government office anywhere in the world is a good example of this approach, as are some public sector banks in India.)

It worked fine for a lot of years. But like any system, it developed aberrations. The big bureaucracy that this division of labour created started having bad effects on the working of systems. That's when Hammer and Champy came up with their revolutionary ideas on business process re-engineering. They proposed systems with generalists instead of specialists. These generalists were to be supported by enabling tools, such as improved IT systems and better collaboration tools (fax, telephone, e-mail etc.). They made multiple decisions, irrespective of their specialization. They used the enabling tools to make those decisions, and in the rare cases when they could not, the job was transferred to an actual specialist. This mode of getting a job done has caught on, and examples are everywhere. (Any private bank in India or anywhere in the world is an example, where the bank teller supports all the functions from opening a bank account to withdrawing cash.)

This approach works because one only rarely needs the services of a true specialist, as compared to a generalist supported by enabling tools. One can see a parallel between these schools and the way software development itself is evolving. The traditional SDLC with BDUF (big design up front) follows a Smithian approach. It expects a lot of specialists to collaborate to develop software. It needs very heavy process support. That's where ISO/CMM comes in.

Agile methods, on the other hand, appear to follow a Hammer-Champy approach to software development, with a slight variation. They rely on multi-role specialists instead of generalists. These multi-role specialists perform multiple roles themselves. They are specialists in those multiple roles (by training or experience or both), hence they need neither big process support nor support from a lot of other specialists. The people who think this can kill outsourcing appear to base that conclusion on the following logic: since multi-role specialists are in short supply and difficult to create, the outsourcers cannot have enough of them; hence outsourcing will stop.

But as I discussed in one of my previous posts about the innovation shown by outsourcers, this too can be handled innovatively. One can always replace the multi-role specialist with a generalist supported by enabling tools and achieve the same result, as originally envisaged by Hammer and Champy. One cannot beat the support systems large outsourcing organisations provide for such a generalist. Large outsourcing organisations have the benefit of sharing a humongous amount of knowledge, which even multi-role specialists don't have access to. So agile methods should not be viewed as an antidote to outsourcing.

As outlined in one of my earlier posts, both these approaches (viz. agile and traditional SDLC) are valid and are valuable in different contexts. IT leaders need to choose the appropriate method based on their needs. As an enterprise architect, it is my worry to provide appropriate governance controls in a uniform framework that works for both approaches. It is vitally important that I put these governance controls in place so that the roles make only those decisions they are empowered to make, because it is very easy to confuse role boundaries between these two drastically different approaches.

Thursday, November 30, 2006

No stereotyping please!

Long ago I was a starry-eyed (a bit of exaggeration here) entrant into the world of IT, when the IT revolution in India was about to begin. I was part of an elite 'tools group', using translator technologies to build home-grown tools for the various projects that came our organisation's way. Amidst all those small projects, a big depository from the western world developed enough faith in us to ask us to develop their complete software solution. The visionaries in my organisation did not do it in the normal run-of-the-mill way. They decided to build home-grown code generators, to ensure consistent quality, and created a factory model of development. I was one of the juniormost members of the team that built and maintained those tools.

Then, while working on another project for a large British telecom company (oops! could not hide the name), another visionary from my organisation put this factory model into practice in a geographically separated way and delivered tremendous cost savings. That was the first truly offshored project done by my organisation. The tools we had developed helped a lot in sending the requirements offshore, in model form, and getting code back to be tested onsite. We provided consistent quality and on-time delivery. Needless to say, it was a huge success and more business came our way. Mind you, this was well before Y2K made Indian outsourcers a big hit.

During my days in the tools group I had the good fortune to attend a seminar by Prof. K. V. Nori. His speciality is translator technologies, and he taught at CMU. He exhorted us to 'Generate the generator!' Coming from a compiler-building background, it was natural for him to say that. But for me it was like an 11th commandment. It captivated me. We did try to generate the generator. During my MasterCraft days, I convinced two of my senior colleagues, and together we designed a language called 'specL'. 'specL' has now become the basis of our efforts on the 'MOF Model to Text' standard under OMG's initiative. This is testimony to the fact that we are not just cheap labour suppliers. We are good enough to be thought leaders within global IT.

It was not cheap labour alone that helped us succeed in the outsourcing business. It was also innovation, grit and determination. That's why it pains me when somebody stereotypes Indian outsourcers as 'sub-optimal', or India as a 'sub-optimal' location. Firstly, I don't like stereotyping, and secondly, it is a wrong stereotype. One can have a position opposing outsourcing, offshoring, what have you. There are enough arguments against outsourcing, but please don't denigrate a group as sub-optimal.

And if I am going to be stereotyped anyway, then please include me in the group of 'all men who are six feet tall, handsome, left handed, and fathers of cute four-year-olds'. Then I may not feel as bad being called sub-optimal. (Well, handsome and left handed are aspirational adjectives, distant from reality.)

Monday, November 27, 2006

SOA in enterprises or Hype 2.0

If dot-com in enterprises was Hype 1.0, then SOA in enterprises is surely coming very close to becoming Hype 2.0. The way SOA has been touted as the best thing to happen to mankind since sliced bread brings it closer to that dubious distinction. Vendors are promising all kinds of things, from flexibility, adaptability and reuse to lower costs, if you use their merchandise to do SOA. SOA is good, as long as decision makers can separate the hype from the reality. I, for one, will be very saddened if SOA goes the same way as the dot-com hype. The following discussion tries to separate hype from reality, so that decision makers have correct expectations and can move along the path of sustainable SOA.

1. Myth of reusable services

In my experience as an architect, I have never seen as-is reuse of a business service implementation. Some amount of refactoring is needed for it to be reused. The refactored business service actually harbours multiple services under a common facade. For a service to be as-is reusable, it needs to be so fine grained that it will have problems with the non-functional attributes of services. To give an example: if I had a business service providing customer details along with holding details, given a customer identity, then I have a couple of options for its implementation.

I) I can build it as a composite service composed of more granular services for customer details and holding details.
II) I can build a monolithic service providing both customer and holding details.

Now remember the lesson we learnt in managing data: always do the join at the source of the data, because at the source you know more about the actual data and can do many more optimisations than you can away from the source. (Remember the old adage: don't do the join in memory, let the RDBMS handle it?) So from a non-functional perspective (scalability and performance), option II) is very attractive and sometimes mandatory.

No doubt, option I) gives me a more reusable service. But it still does not give me an absolutely reusable service implementation. For example, suppose I need the customer details with holding details for three different kinds of execution scenario, viz.
a) an on-line application for customer service,
b) a batch application to create a mass mailer, and
c) a business intelligence application to understand customer behaviour (with holdings as one of the parameters).

Even though I have more granular services, not all of them are usable in all these different execution contexts. I cannot simply call the granular services in a loop to get the bulk data needed for scenarios b) and c) above. So reusability is restricted by execution context. Of course, you can throw hardware at this problem to solve it. But then your costs escalate, and any savings you made by reusing software will be more than offset by hardware costs. So just because you organise your software in terms of services (which essentially specify the contract between user and supplier, and nothing more), you are not going to get reusability. It will enable reuse within an execution context, but not universal reuse. If we treat services as explicit contract specifications between users and suppliers, then we should attempt to reuse those contracts. That, however, does not automatically translate to implementation reuse.
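A minimal sketch of the point, with illustrative contracts: the granular services compose nicely for the on-line scenario, but scenarios b) and c) need a bulk contract whose implementation does the join at the source.

    import java.util.List;
    import java.util.Map;

    // Illustrative contracts only; no real services behind them.
    public class GranularityDemo {

        interface CustomerService { String customerDetails(String customerId); }
        interface HoldingService { List<String> holdings(String customerId); }

        // Option I: composite service, fine for one on-line request at a time.
        static String onlineView(CustomerService cs, HoldingService hs, String id) {
            return cs.customerDetails(id) + " / " + hs.holdings(id);
        }

        // Scenarios b) and c): a mass mailer or BI load over a million
        // customers. Calling the granular services in a loop means millions
        // of round trips; what is really needed is a bulk contract whose
        // implementation does the join at the data source (option II).
        interface BulkCustomerHoldingService {
            Map<String, String> customerWithHoldings(List<String> customerIds);
        }

        public static void main(String[] args) {
            CustomerService cs = id -> "Customer " + id;
            HoldingService hs = id -> List.of("FUND-A", "FUND-B");
            System.out.println(onlineView(cs, hs, "C42")); // reuse works here
            // The same two implementations are NOT reusable for the batch
            // scenario; the contract can be reused, the implementation cannot.
        }
    }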

2. Myth of composite applications

This myth is related to the one above. In most other engineering disciplines, real-world components are standardized, and higher-level solutions are typically component-assembly problems. Not so in software. Even if we have services, their assembly does not necessarily behave within accepted parameters, even though a single service might behave OK. So composing implementations to arrive at a solution is not so straightforward. Many vendors would have you believe that if you use their software, most of your software development will reduce to assembly of services. This is not true, for the following reasons. The correct granularity and definition of services is known to the user organisation, not the vendor. These service definitions are dictated by the user organisation's business practices and policies. Each organisation is different, so a vendor can never supply you those service definitions. If a vendor does not know what the services look like and what their properties should be, how on earth is he going to guarantee that a composition of such services will behave in the desired manner? And as outlined in the point above, implementation reuse is a big problem, so even on that front vendors cannot help you. So the composite application will remain a myth for some time to come. Vendor sales and marketing machinery will show you mickey-mouse applications built in a composite-apps scenario. But demand to see at least two productionized composite apps where the majority of the constituent services are shared between the two. My guarantee is, you won't find any.

So is SOA a BIG hype about nothing? Not exactly. It does provide the following benefits.

1. Manageability of software with business alignment

The single most important contribution of SOA is that it connects software with business. In an SOA approach, one can make sure that all software is aligned with business needs, because all software is traceable to the business needs it serves. The whole edifice of building, maintaining and measuring the utility of software revolves around business services in an SOA approach. So it becomes easier to maintain focus on the business benefits (or lack thereof) of software. With the traceability it provides, software becomes a manageable entity instead of an unwieldy and randomly interconnected monolith. And there is some reuse possible in terms of non-functional services (such as security, authentication, personalisation etc.).

2. Ability to separate concepts from implementation

The next important contribution of the SOA approach is the focus it brings on separating interface from implementation. The logical extension of this approach is to separate conceptual elements from platform elements. So if you are using an SOA approach to software development, you have the necessary inputs to create a large-scale conceptual model of your business. You just need to filter out the platform-specific stuff from the interfaces you defined. You can further distill these interface specifications to separate the data and behaviour aspects. These are the really reusable bits within your business. It is then fairly easy to figure out how these reusable bits can be implemented on different implementation platforms. This will give you the necessary jump start for your journey towards a conceptual IT world.
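The separation, as a toy sketch (hypothetical names): the interface is the conceptual, platform-independent bit; each implementation binds it to a platform; and the conceptual bit is what you get to reuse.

    // The conceptual contract: no platform leaks into this interface.
    interface CustomerLookup {
        String findCustomer(String customerId);
    }

    // One platform-specific binding (imagine JDBC behind it)...
    class DatabaseCustomerLookup implements CustomerLookup {
        public String findCustomer(String customerId) {
            return "from RDBMS: customer " + customerId; // stand-in for a real query
        }
    }

    // ...and another (imagine a web-service call behind it).
    class RemoteCustomerLookup implements CustomerLookup {
        public String findCustomer(String customerId) {
            return "from service: customer " + customerId; // stand-in for a real call
        }
    }

    public class SeparationDemo {
        public static void main(String[] args) {
            // Callers depend only on the concept; the platform can be swapped.
            CustomerLookup lookup = new DatabaseCustomerLookup();
            System.out.println(lookup.findCustomer("C42"));
        }
    }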

So in my opinion SOA is good and it is the way to go, but not for the reasons stated by vendors. It is not going to make software drastically cheaper, nor is it going to make software development drastically faster. It is just a small step in a long journey towards making enterprise software an entity managed by business folks rather than IT folks.

Tuesday, November 07, 2006

Agile, Iterative or Waterfall?

There has been a lot of interest in, and misconceptions about, the various life cycle methods for solution development. Please note carefully that I am saying solution development and not software development. Enterprises develop solutions to problems. The software content of the solution is developed by the IT sub-organisation. The rest of it is assigned to different sub-organisations within the enterprise. So when we discuss software development life cycle methods (I'll use the short form SDLC henceforth), we must remember solution development life cycle methods (I'll use SolDev as the short form henceforth) as well. A software development and deployment method has to synchronize with the solution development and deployment method.

There are various SDLC methods in vogue. The waterfall method has been in use for ages and has its supporters and detractors. Iterative methods originated some time back and are in use in many enterprises. Agile methods are the newest kid on the block and are yet to make serious inroads into enterprise IT.

Waterfall is a sequential method: each phase waits for the previous one to finish completely and expects it to deliver a signed and sealed deliverable. This deliverable is enhanced in the next phase, until the software gets delivered. It assumes that the requirements are well understood and won't change during software development. It is the most risky of the development approaches and has quite a large failure rate.

The iterative method is, as its name suggests, iterative. It creates an initial, fully functional version of the system and iteratively adds functionality to make it complete. During each iteration, it also takes into account users' feedback on the earlier delivered functionality and corrects the implementation.

The agile method is a more aggressive version of the iterative method, where timelines are shorter and sacrosanct. It also believes in face-to-face communication rather than written documentation.

Each has its own strengths and weaknesses, and choosing one over another is not a trivial decision.

A solution development method is normally iterative, sometimes waterfall, but rarely agile. Solution development and deployment normally involve dealing with real-life things, which are not as soft as software. That may explain why they don't use agile methods that much.

Typically, quick-fix and operational solutions rarely involve a big solution design and deployment effort; the major effort is consumed by software development and deployment. Hence agile methods can be deployed as the SolDev method. Tactical and strategic solutions, on the other hand, involve a significant solution design and deployment effort, so an iterative method appears the right choice for SolDev. Modern enterprises rarely use the waterfall method, as it is too fraught with risk. Again, I am referring to the intent of the solution, not the systems, when I say operational, tactical or strategic.

For example,

If you were to repair a leaking window in your house, you would call a tradesman, interact with him and get the job done in a day or two. You would give constant feedback and get it done the way you want. This is a quick-fix solution, and an agile method can be (so to say) the SDLC method.

Whereas if you were to add a conservatory to your house, you may have to interact with lots of tradesmen (or outsource to a contractor), you have to worry about rearranging the furniture in your house, and you may have to change the nature of the games at your kid's birthday party. That's a tactical solution and can hardly be agile. You may iterate over the development of this solution, by first building the conservatory, then adding the new furniture and relocating the existing furniture. You also have to think about new games for the birthday party which take advantage of the conservatory and furniture settings. Here the actual building of the conservatory is like building software, and the other things you do are part of solution development and deployment. Both need to follow the same life cycle method, otherwise you'll have problems. An agile method for both SDLC and SolDev won't work, because you would not have the bandwidth to support software development (i.e. building the conservatory) as well as solution development (i.e. doing the other stuff, such as buying new furniture and relocating the old). And the SDLC alone can't be agile, because the rest of the solution will not be ready anyway.

The same goes for building a new house altogether. That's a strategic solution, and you would still want an iterative approach: build the basic structure of the house, add all the utilities, then the interiors and finally the finishing, constantly giving feedback and checking for yourself how the house is getting built.

Were you to do it in the waterfall model, you would call in a contractor, tell him what you want, and hope he delivers before your required date. Well, if it is something as standardised as house building and the contractor is reliable, you may consider this option.


So it is quite clear that different life cycle methods suit different kinds of SolDev and SDLC. Each has its strengths, but needs to be deployed in the right kind of scenario. An enterprise architect needs to define the decision framework for making this choice within an enterprise.

To reuse or not to reuse?

As soon as I posted about reuse, an old colleague of mine wanted to reuse a small piece of code I had developed quite a while back.

It was nothing great. When we were attempting to make client-server connections to CICS in the good old days, we were hitting the limits on the CICS COMMAREA. We thought that if we compressed our message, we would not hit the limit. Since it was an over-the-wire message, all content was required to be non-binary. One place where we could save space was by packing integers into a higher-base representation, because those messages carried a lot of integers. A base-64 representation of an integer would take 6 digits as against 9 in a base-10 representation, and would still be capable of going over the wire. This piece of code was required to be as optimal as possible. So we developed a function which would pack integers into a base-64 representation. And we had used a trick to make it faster, by taking advantage of the EBCDIC representation. It is part of the library we supply along with our MDA toolset.
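For flavour, here is a portable sketch of such a packing function in Java (the original was mainframe code, and this is not it): six base-64 digits comfortably cover what needs nine digits in base 10. Our speed trick exploited the layout of EBCDIC code points, and that, as you will see below, is exactly the assumption that travels badly.

    // Portable sketch of fixed-width base-64 packing for non-negative ints.
    // The production version gained speed from EBCDIC code-point arithmetic,
    // which is precisely the hidden execution-context dependency in this story.
    public class Base64Pack {

        private static final char[] DIGITS =
                ("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ" +
                 "abcdefghijklmnopqrstuvwxyz+/").toCharArray();

        static String pack(int value, int width) {
            if (value < 0) throw new IllegalArgumentException("non-negative only");
            char[] out = new char[width];
            for (int i = width - 1; i >= 0; i--) {
                out[i] = DIGITS[value & 63];  // low 6 bits = one base-64 digit
                value >>>= 6;
            }
            return new String(out);           // high bits beyond 'width' are dropped
        }

        public static void main(String[] args) {
            // 999,999,999 needs 9 characters in base 10 but fits in 6 here.
            System.out.println(pack(999_999_999, 6));
        }
    }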

My colleague wanted to reuse the code, albeit in a different scenario, as it was well tested and had been in production for so many years. Needless to say, he would have fallen flat on his face if he had had blind faith in the code. It would have failed because it relied on the EBCDIC representation, and he was trying to deploy it in a non-EBCDIC setting.

Why am I narrating this story? It just re-emphasises my point about implementation reuse. Even with the best of intentions and support, implementation reuse is not as easy as it looks. My colleague was lucky to have me around. Who would think that such a 50-line piece of code could go so horribly wrong in a different execution context? If we had separated the concept from the implementation in this case, and generated an implementation for my colleague's execution context, it might have worked. But without that, he has to refactor the code, which may wipe out the gains he hoped to get by reusing it. I am not sure how I could have made that piece of code more reusable than it is, without breaking the non-functional constraint imposed on it by its own execution context.

Now, with refactoring, we could have a piece of code which is more reusable than it was, but my colleague would have to spend the effort to make it so. It depends on whether he has that kind of investment available to him. And it still won't guarantee that it won't fail somebody else's requirement in a totally different execution context. This makes me ever more convinced: either have concept reuse, or be prepared to refactor while reusing. And don't expect 100% reuse.

Monday, October 30, 2006

Don't build for reuse, reuse what you have built

All architects are very passionate about reuse. Reuse is one of the most admired and promoted principles. And often enough, business sponsors of IT projects seem to spurn the extra funding required to make ‘something’ reusable. That’s an irritant faced by most architects. So what is the problem? Why can’t this seemingly smart person from the business side see the wisdom of building something for reuse and spend that extra cash? Is it possible that they are smarter than the architects? Probably they know that by spending that extra cash they are not really going to get that reusable artifact.

Maybe we need to turn the argument on its head to get the business sponsor’s perspective. Imagine an architect saying to a sponsor, “Don’t give us extra cash so that we can build something reusable; rather, we’ll save some cash by reusing some of the existing artifacts.” [That will be music to the sponsor’s ears.] However, it is a difficult statement to make.

An architect will protest: “How can we reuse something which was not built for reuse? How do we know how well the artifact satisfies my functional requirements, let alone the non-functional requirements? If I have a problem reusing it, who is going to help me out? How much additional effort am I going to spend in understanding it and plumbing it into the rest of my solution? Is that effort much less than the effort I would have spent building it myself? Do I save on maintenance, or am I doing ‘clone and change’ reuse?”

If you look at it, most of this applies to something which was built for reuse too. How does one build something for reuse in a potentially unknown future scenario? With unknown functional as well as non-functional requirements? What organization (and of what size) does one need to support this reuse? Difficult questions! So, being a pragmatist, I am inclined to give up the notion that something worthwhile can be built for as-is future reuse, and tend to agree with those (smart!) business sponsors. Wait, before you declare me a traitor to the architect community, I would like to state that some reuse is still possible.

What I believe is that reuse at the conceptual level is still possible, and useful. So if we separate the concepts from the implementations (a la MDA: CIM v/s PIM v/s PSM), we can reuse the concepts already built, and choose a more appropriate way to connect with an available implementation for the current scenario. What are these concepts? They can be anything conceptual: a business process, a conceptual data model, a set of business rules. They are closer to business requirements than to IT design or implementation. Most probably you will use fragments of these conceptual elements rather than a complete conceptual element. Does this need a full-blown MDA tool set? No, not in my opinion. One needs to separate concepts from implementations using any notation one is familiar with, and keep a repository of such concepts. For any new solution you are going to build, the first step would be a search of this concept repository. You find something useful, (re)use it. The effort and organization needed for creating and maintaining such a repository is not huge. And of course you need to adopt slightly different solution development practices to institutionalize this. [Well, for a small consideration, we can help you there ;-) ]

So, don’t build for reuse, rather reuse what you have built!

Wednesday, October 18, 2006

Is single version of truth achievable?

Did you ever have a programme in your IT organisation to build 'the single view' of data for some subject area, say Customer or Product? These types of programmes go by different names: Book of Record, System of Record, Single Version of Truth and so on. Have you experienced the agony one goes through trying to create a single view acceptable to the different viewpoints that exist in an organisation? There will always be some viewpoint which wants some data at a different level of granularity, or a different level of currency, or both, from the rest.

No, I am not talking about CDI/MDM. The problem I am talking about is defining what 'the single view' of a particular subject area should look like. Even after you decide that, there are further challenges of collecting, reconciling, cleansing and hosting the data. That's what CDI/MDM predominantly addresses. But who helps you decide what the right single view of a particular subject area is? Frankly, nobody.

So isn't 'a single view' of a subject area a misnomer? Let me make a logical argument. Even when we build an IT system, we start with a nice third normal form data model. But the non-functional requirements, such as performance and scalability, make us abandon the third normal form and introduce a level of redundancy, what we popularly call denormalisation. And we live with it.

Why not take a similar approach to data at the enterprise level? Let's accept the fact that the different viewpoints are not always reconcilable, and that the single view of a subject area is impossible to achieve. Why not build a fit-for-purpose single view of the subject area instead? There can be as many single views as there are purposes. Some of the purposes may collaborate with each other and can reuse each other's single views, so in reality there will be fewer single views of a subject area than there are purposes. Of course, you need to create mechanisms to keep these different single views in synchronisation. Well, why create one level of indirection at all, why not go directly to the multiple views underlying this single view, maybe using a service facade? I have answered these questions in one of my earlier posts; you would still need these fit-for-purpose subject-area single views. You can use MDM/CDI technologies to build them.
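A minimal sketch of the idea (hypothetical names): each purpose gets its own fit-for-purpose view of Customer, at its own granularity and currency, behind a facade; some synchronisation mechanism behind the facade is the price you knowingly pay.

    // Illustrative only: two fit-for-purpose 'single views' of Customer.
    public class FitForPurposeViews {

        // The customer-service view wants current, fine-grained data.
        record ServiceViewOfCustomer(String id, String name, String address) { }

        // The analytics view wants aggregated, day-old data: a different
        // granularity and currency, deliberately NOT reconciled into one view.
        record AnalyticsViewOfCustomer(String id, int holdingsCount, String asOfDate) { }

        // A facade routes each purpose to its own view; behind it, a sync
        // mechanism (batch feed, CDC, an MDM hub) keeps the views aligned.
        static ServiceViewOfCustomer forCustomerService(String id) {
            return new ServiceViewOfCustomer(id, "A. Customer", "12 High St");
        }

        static AnalyticsViewOfCustomer forAnalytics(String id) {
            return new AnalyticsViewOfCustomer(id, 7, "2006-10-17");
        }

        public static void main(String[] args) {
            System.out.println(forCustomerService("C42"));
            System.out.println(forAnalytics("C42"));
        }
    }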

And if you don't believe me, then I have a bridge to sell you ;-)

Wednesday, October 11, 2006

Communism and enterprise architecture

It is important that one learns how to solve common problems using patterns. It is equally important to learn how not to solve problems, using anti-patterns. One of the greatest anti-patterns relevant to enterprise architecture that comes to my mind is Communism.

Communism is an ideology that seeks to establish a future classless, stateless social organization. There is very little to disagree with in this noble goal. The problem arises when one tries to follow the migration path from the 'as is' state to this nice and wonderful 'to be' state.

Enterprise architecture seeks to define a similarly nice and wonderful 'to be' state for enterprise IT and tries to provide a migration plan from the seemingly chaotic 'as is' state. Therein lie the similarity and the lessons for EA. What are the important lessons to be learnt from the failure of Communism?

Centralised command and control alone cannot guarantee results

Enterprise architects sometimes try to rule by diktat. Thou shalt do this and thou shalt not do that... Setting common principles is necessary. But the failure of communism teaches us that a central politburo can command whatever it likes; things need not happen on the ground as per its diktat. At least there is no guarantee that the spirit of these principles (diktats) will be observed. With just the letter of the diktats enforced, the expected results will not follow. So there should not be too much dictating, and whatever is dictated must be governed, to make sure it is followed in letter and spirit.

Evolution, rather than revolution, works

Communist ideologues decided that the current system was too broken to be fixed, hence they advocated a violent overthrow of the current system and its replacement with a new (better/improved) system. Such a revolutionary approach did not work in practice. It is indeed nearly impossible to design a system from scratch as a replacement for another working system and swap the old for the new in one go. It is more advisable to chip away at the problems of the old system, replacing parts of it as we go along. It is also important to keep readjusting priorities as we go along, because nothing in this world is static.


Checks and balances are required in governance

Another ill effect of centralised command and control was corruption and general inefficiency. The middlemen prospered without adding any value. So a proper set of decentralized checks and balances is an absolute must for efficient governance. In the enterprise IT world, business and IT folks exhibit a certain amount of tension in their relationship. So EA must create a balanced mechanism where both sides are represented and heard, and decisions are made which are acceptable to both parties. Including all stakeholders is a must for efficient governance. This would ensure that the right solutions get developed, not the politically correct solutions.

Stakeholders must buy into the 'to be' state

All communist states had a significant number of capitalists who would never agree with the 'to be' state as stated by the communists. So the communists could never reach their desired 'to be' state, no matter what their migration plan was. Mind you, these capitalists were not just ideologically capitalist; they had a stake in being capitalist. Pol Pot, one communist megalomaniac, tried to address the problem by eliminating such dissenters on a mass scale. Even that did not work. So it is very important that a significant section of the business and IT organisation buys into the 'to be' state. Without that, there will be too much friction and resistance to change. This needs to be remembered in conjunction with the evolution-not-revolution lesson above.

People involved must be able to connect current happenings with 'to be' state.

The empty store fronts and long queues for daily essentials in communist states did not reconcile with the tall claims of progress by the central politburo. The communists did put a man in space and fired giant rockets. But where it mattered most, the daily lives of their stakeholders, they failed to deliver. Enterprise architects can fall into a similar trap. A set of principles, a town plan and what have you will be taken with a pinch of salt by common IT and business folks, unless you show results on the ground. Stakeholders must be able to connect the things happening around them to the grand vision of the enterprise architects, based on the results they have seen so far.

These are only a few of the lessons from the failure of communism; enterprise architects can keep learning from the history of communism and use it as an effective anti-pattern.

Friday, October 06, 2006

Demand supply mismatch in IT shops

Demand management in enterprise IT shops is a perennial problem. The problem is actually one of demand-supply mismatch: there is a gap between the demand for qualified IT professionals and the supply available to serve the ever increasing IT demand from enterprises. This assertion is based on personal experience; I don't have any data right now. My observation is that many big enterprises I have worked with invariably have a big IT backlog.

Enterprises have tried various options, including outsourcing, to tide over these issues. Outsourcing organisations have a bigger resource pool of qualified professionals, and other enablers, to help match demand. But there are situations where even outsourcing does not help in handling the demand-supply mismatch.

If a lack of skilled professionals for a particular skill is the reason for the demand-supply mismatch, outsourcing can help. Whereas if the reason for the backlog is a lack of smoothing of the demand on some entity within IT, making that entity a bottleneck, then steady-state outsourcing does not help, which in turn gives rise to more demand-supply mismatch. Mind you, this is not some fixed entity within the IT shop. Any entity can be sucked into this situation, based on its role within the various IT projects that are going on. If your IT shop is organised on the basis of SDLC roles, then that entity can be the pool of senior designers, system testers, even enterprise architects. If your IT shop is organised on architectural layers, then it can be the front-end, business logic or database unit. Or if your IT shop is organised by functional components, then it can be any of the functional components.

It might so happen that a large number of projects starting now are going to hit that particular entity around the same time, causing a demand surge.

What can be done to handle such situations?

One obvious solution that comes to mind is: don't start all the projects at once. But the problem is that one cannot predict future demand, and the situation can still arise even if you deliberately defer projects; some other project might crop up which has other imperatives (business or regulatory, say) to start, and cause the demand surge. Also, the project budgeting and planning of IT shops happens periodically, which does not help. One cannot really have these activities aperiodically, so what's the solution?

The solution, again, is outsourcing. What we saw earlier is a case of pro-active outsourcing, which takes care of some problems. The problems arising out of demand surges can be handled by re-active outsourcing. Outsourcing offers an advantage in making IT expenditure a 'variable cost', so committing and withdrawing resources is easy. IT shops can work out deals with outsourcing companies on a contingency basis, committing some resources permanently to this contingency resource pool, with an agreement to ramp the pool up when a demand surge arrives and down when demand ebbs. The advantages being
  1. Outsourcing companies have a resource pool which can absorb these demand surges.
  2. Outsourcing coupled with offshoring makes this otherwise dead investment economically viable.

Outsourcing companies have global knowledge and can work out the deals (billing rates, utilisation etc.) to their advantage. It's a win-win proposal.

And as an EA it makes me happy, because none of my strategic projects will be derailed by a demand-supply mismatch.

Tuesday, October 03, 2006

Evolution of an IT worker

Early in their careers, IT folks think technology has solutions for all the ills within enterprise IT departments. There is a technology solution for every problem. Something does not work? Use that tool, automate this, do straight-through processing.

After a while they realize that no amount of technology is going to help unless there are proper processes. This is the second stage of evolution: along with technology, processes are now deemed necessary. Our IT pro goes on a process building spree. He may even build processes to define processes. And then, after toying with technologies and processes for a while, he realizes they are not really having the desired effect on the enterprise IT scenario.

That's when he realizes the importance of people. Empowered people who have bought into your ideas can make things happen. This is the next stage of evolution: everything is achievable by the right kind of people; we don't need any technology and/or processes.

And then, when this people-only approach fails miserably, our IT pro achieves his nirvana by recognizing that it is a judicious mix of the right amount of technology, effective processes and efficient people that makes IT work for an enterprise.

It is very important for an enterprise architect to bear this evolution in mind, for he has to deal with folks at different levels of it. He will have to work with many technology taskmasters, a few process pundits, fewer people's politicians and even fewer who have achieved IT nirvana. To keep delivering the right enterprise architecture and sustain it, he has to take all these people on board and make proper use of their enthusiasm and leanings. Among themselves, they present the right mix of people to define, build and govern an efficient enterprise architecture organisation.

Needless to say, an enterprise architect must have reached this IT nirvana himself to realize this.

Friday, September 29, 2006

Enterprise architect, solution architect, what's the difference?

I see most architecture practices have a progression marked from solution architect to enterprise architect. So what does it take to make this progression? Is it, for example, that once you have been a solution architect on 'n' projects you are qualified to become an enterprise architect? Or is it that once you have been a solution architect on 'n' types of project, you can become an enterprise architect? Or is it analogous to a caterpillar becoming a butterfly? Is there a moment of Zen when a solution architect becomes an enterprise architect? Can an enterprise architect descend to become a solution architect? Is it really a descent?
Let me attempt to answer these questions per my understanding. To me, a solution architect provides a framework so that a sound solution can be designed and implemented. Since a solution typically spans multiple organisational entities within the enterprise, the framework thus established is, in a sense, valid for the entire enterprise. So what value does an enterprise architect add, over and above this?

An enterprise architect has to set up such a framework for the entire enterprise (not restricted to some entities within it). A solution architect has some freedom in setting the framework for his solution, based on the overarching framework for the enterprise. He can override the enterprise-wide framework, if his solution so demands, after following the governance protocol. A solution architect can extend the framework and make it more granular. That is, the enterprise-wide framework will be coarse grained, whereas the solution-level framework will be fine grained. The solution-level framework will have some reusable parts, which can be envisaged in any solution; those should be moved to the enterprise framework. Life cycle changes happen in a solution-level framework until the solution gets deployed, and then the framework is frozen. An enterprise architect, in contrast, has to make sure his framework is deployed rightly across various solutions and govern the changes or deviations from it. He also has to keep evolving the organisation-wide framework, all the time.
So from an object-oriented viewpoint, a solution architect is a base class (which appears counter-intuitive) and an enterprise architect is a derived class. An instance of enterprise architect is also an instance of solution architect, but an instance of solution architect is not an instance of enterprise architect. Once a solution architect develops the ability to generalize and abstract architectural concepts, he can progress to become an enterprise architect.
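In code, the analogy reads like this (a toy sketch, of course):

    // A toy rendering of the analogy; the methods stand in for abilities.
    class SolutionArchitect {
        void setFrameworkFor(String solution) {
            System.out.println("Framework for " + solution + ", frozen at deployment");
        }
    }

    class EnterpriseArchitect extends SolutionArchitect {
        // Adds enterprise-wide, continuously evolving responsibilities.
        void setEnterpriseFramework() {
            System.out.println("Coarse-grained framework, governed and evolving");
        }
    }

    public class ArchitectDemo {
        public static void main(String[] args) {
            EnterpriseArchitect ea = new EnterpriseArchitect();
            ea.setFrameworkFor("payments revamp"); // an EA can still do SA work
            ea.setEnterpriseFramework();
            SolutionArchitect sa = new SolutionArchitect();
            // sa.setEnterpriseFramework();  // does not compile: an SA instance
            //                               // is not an EA instance
        }
    }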

Friday, September 15, 2006

Requirements management is crucial

It is pertinent to ask how other disciplines of engineering are able to build systems with guarantees, whereas software engineering cannot guarantee anything. Is it due to the 'softness' of software, or are there fundamental differences?

One fundamental difference between business systems and the systems built by other engineering disciplines is that the latter deal with the physical world, which is well understood. There are precise models of such systems, and those models scale up. We in software engineering equate models with pictures, but what I mean here is higher-level abstractions that express the detail, be it a set of partial differential equations or a complex formula of that kind. The key is that these are models rather than enumerations. And system requirements can be expressed in terms of these models and the measures used in them.

E.g. one can specify the behaviour of a physical system using properties such as pressure and temperature, and then specify requirements in those terms. The implementer knows how components behave for a given input and can construct an implementation meeting the requirements. The implementer can find out what the temperature will be for a given pressure, so he can decide how to set the pressure to achieve the expected temperature.

I think the equivalent measures for business systems are data and process. Unfortunately, we have no clue how they behave in isolation or what their interrelationship is. There are no partial differential equations describing the behaviour of data and process. So we have to specify the requirements as an enumeration over data and process. Imagine if an engineer had to specify what the temperature would be for every value of pressure, and then describe what pressure range he wants the system to operate in, with what maximum temperature. Yet this is exactly what we do as requirement specification in software engineering.
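The contrast, as a sketch: the physical engineer ships a formula, while we ship a lookup table, and every missing row is a specification flaw. (The constant below is arbitrary; the point is the shape of the two specifications.)

    import java.util.HashMap;
    import java.util.Map;

    public class ModelVsEnumeration {

        // The engineer's specification: a scalable model (T = k * P, k arbitrary).
        static double temperatureModel(double pressure) {
            final double k = 2.5;
            return k * pressure;
        }

        // Our specification: an enumeration over the inputs we thought of.
        static final Map<Double, Double> TEMPERATURE_TABLE = new HashMap<>();
        static {
            TEMPERATURE_TABLE.put(1.0, 2.5);
            TEMPERATURE_TABLE.put(2.0, 5.0);
            // ...every value not listed here is a hole in the specification.
        }

        public static void main(String[] args) {
            System.out.println(temperatureModel(3.0));      // defined everywhere
            System.out.println(TEMPERATURE_TABLE.get(3.0)); // null: flawed spec
        }
    }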

And if your enumeration is not complete, you have a flaw in the specification. And nobody can build a correct working system from a flawed specification. Of course, we still have the problem of converting a specification into an implementation without introducing any errors! The MDA approach is trying to address that. The manual way of converting requirements into an implementation is also well understood now. It is a problem, but a better understood one.

Yet there are some systems built using software engineering, especially aviation-related or embedded systems (like the ones found in washing machines), where a more formal approach is adopted. The requirements are specified formally and then validated before the implementation is built. It is possible because either the requirements can be specified, as in other branches of engineering, using scalable models, or the enumeration is tiny and hence free of the scalability problem.

So for business information systems we have the following problems:

  1. Abilities to represent the abstract specification (the model) aren't mature
  2. Completeness and correctness of such a model is difficult to establish
  3. Scalability of such a model (the model becomes bigger and more complex than the system, for large systems)
  4. Converting such models into implementations (if you can address all of the above, then you can use the MDA approach to achieve implementation, or do it manually)

These are not easy issues to handle, especially items 2 and 3 in the list above.

What has this got to do with Enterprise Architecture? Well, as an Enterprise Architect one is helping build information systems for the enterprise while trying to reduce TCO, protect investment and avoid obsolescence. All the modern approaches which aim to simplify the building and maintaining of information systems (e.g. ESB, SOA) have a need for requirement specifications, with their own representation mechanisms (e.g. BPMN). If one is not aware of these fundamental issues in system requirement specifications, then these implementations are not going to succeed. So while one can delegate the implementation part to vendor tools and project teams, an EA must take charge of building, managing and governing these requirement specifications as a top priority item. This is also a key to better business-IT alignment.

Tuesday, September 12, 2006

Services as contract

Recently we had a visit from Bertrand Meyer (of Eiffel fame). He mentioned how the Eiffel language helps make the contract between user and supplier explicit. He also elaborated on the different ways the contract definition can be used (one of the interesting uses he showed was 'push button' testing). When I asked him how the language discovers contradictions in a contract specification, he said "It does not". Which was kind of disappointing; a wrong contract specification, however faithfully implemented by the implementor, is no good.

This incidentally is the point I made in one of my earlier posts. Not only must one make the contract explicit, as most SOA proponents propose, but we must also have the ability to discover contradictions in the contract specification. We must be able to see whether multiple contracts can co-exist and satisfy some common properties. It is not easy. However, this is an important aspect of contract specification and must be addressed. The service definition should define how compatibility would be established between various services (which essentially are contracts), and governance must ensure that this continues to be the case. Such definition-time checks will help prevent costly budget and time overruns, not to mention service proliferation and governance nightmares.
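
Here is a tiny Java illustration of the kind of definition-time check I have in mind; the two contracts, their range-based form and the composability rule are all my own simplifications, not anything Eiffel or any SOA standard provides.

    // Hypothetical sketch of a definition-time compatibility check between
    // two service contracts, reduced to pre/postcondition ranges.
    public class ContractCheck {
        // A contract here is just: the range of values a service accepts
        // (precondition) and the range it promises to produce (postcondition).
        record Contract(String name, int inMin, int inMax, int outMin, int outMax) {}

        // Can the output of 'producer' always be fed to 'consumer'?
        static boolean composable(Contract producer, Contract consumer) {
            return producer.outMin() >= consumer.inMin()
                && producer.outMax() <= consumer.inMax();
        }

        public static void main(String[] args) {
            Contract pricing = new Contract("pricing", 1, 100, 0, 10_000);
            Contract billing = new Contract("billing", 0, 5_000, 0, 5_000);
            // Definition-time check: pricing may emit values billing cannot
            // accept, so the two contracts contradict each other when composed.
            System.out.println(composable(pricing, billing)); // false
        }
    }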

Friday, September 01, 2006

Enterprise architects must show leadership

I have observed that in most organisations the Enterprise Architect (EA) community is viewed as 'ivory tower ideologues' creating nothing but 'PPTware'. In the entire IT organisation no other community is derided more than the Enterprise Architect community (maybe the project management community can compete with it).

The business and IT leaders sanction the Enterprise Architecture organisation and budgets, partly because all the analysts point to such a need. But I am not sure how convinced they are about the necessity of Enterprise Architecture. The solution implementor (for lack of a better word) community always wants to be left alone and does not want to be dictated to by the Enterprise Architect community.

This is not an ideal situation for the Enterprise Architect community to be in, where neither your superiors nor your subordinates have any faith in you. Is it because the Enterprise Architect community always devises these nice 'end games' or 'to be states' or what have you, but fails to lead the IT organisation to that utopia? The problem arises when, in order to reach the end state, what needs to be done in the near future is not spelt out clearly. How does one trade off the pressures of business changes, changing technology and organisational culture, and still work towards the desired end state? The Enterprise Architect community must provide practical answers to address these trade-offs without losing sight of the end state. This is very difficult.

Just to cite an example: in an IT organisation I was working with, all funding was tied to business benefits. Some of the infrastructural projects that needed to be carried out in order to reach a desirable end state did not have any chance of being implemented, because the cost-benefit analysis would stack up heavily against such projects. One cannot blame business for having such a stringent benefit-centric approach, because in the past business had burnt millions without IT producing a single usable artifact. An Enterprise Architect needs to tread through such situations and provide viable and practical approaches.

This is, in essence, the challenge to the Enterprise Architect community, and when the Enterprise Architect community successfully tackles these situations, it will gain the respect of the overall IT community. So the job does not end with defining the Enterprise Architecture; that is a mere start. The real challenges are in the governance and deployment of the Enterprise Architecture, and in showing the necessary leadership.

Monday, August 21, 2006

SOA Questions answered

I am thankful to Sam Lowe for pointing out the work going on at the OASIS forum, which seeks to address the core issue of service definition. The published SOA reference model does provide necessary conditions as to what a service definition entails. These definitions are more from the point of view of a service provider. They state: if I have a capability which I want to expose as a service, what I must minimally provide for the capability to be successfully exposed as a service. This will help address questions such as

  • What is its execution context (i.e. pre- and post-conditions)?

  • What are the expected service levels? Etc.
There is one more addition I would love to see in this reference model. This is more from the point of view of a consumer, which in turn can help service providers in service evolution. This addition would help answer questions for a service consumer such as

  • If I do not have the right service for the task at hand, can I choreograph the services I do have into a composite service?

  • Will this composite service have the required functional and non-functional properties?

This would need a specification of composability in both the functional and the non-functional sense. I should be able to specify a 'Plan of Action' (PoA) using services, and should be able to specify functional and non-functional constraints on this PoA. This PoA is not necessarily only temporal (e.g. like a process) but can be structural too (e.g. like a 'join' in the relational world). The reference model can then constrain the service specification such that, using the service specification, I should be able to check the viability of this PoA.
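
For illustration, a PoA might be represented along the following lines. Everything here, the record shapes, the latency figures and the viability rule, is my own assumption for the sake of the sketch; none of it comes from the OASIS reference model.

    // Hypothetical sketch of a Plan of Action (PoA) over services, with one
    // non-functional constraint checked for viability at definition time.
    import java.util.List;

    public class PoaDemo {
        // A service specification: its name and its promised worst-case latency.
        record ServiceSpec(String name, int maxLatencyMs) {}

        // A PoA: a temporal composition of services plus a non-functional
        // constraint on the plan as a whole. (A structural PoA, like a
        // relational join over two services, would need a richer shape.)
        record PlanOfAction(List<ServiceSpec> steps, int latencyBudgetMs) {
            // Viability check: do the service specs, taken together,
            // fit within the plan's constraint?
            boolean viable() {
                int worstCase = steps.stream().mapToInt(ServiceSpec::maxLatencyMs).sum();
                return worstCase <= latencyBudgetMs;
            }
        }

        public static void main(String[] args) {
            PlanOfAction poa = new PlanOfAction(
                List.of(new ServiceSpec("getCustomer", 200),
                        new ServiceSpec("getHoldings", 400)),
                500); // budget too small for the 600ms worst case
            System.out.println(poa.viable()); // false: reject at definition time
        }
    }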

Usages of this PoA are manifold.
E.g.

  1. A consumer can request a PoA to be executed instead of individual services. SOA infrastructure can execute this PoA, possibly using an ESB. This specification will help the ESB infrastructure do the necessary optimizations to provide the required non-functional capability.

  2. If there are enough consumers asking for this PoA, a service provider may decide to provide this PoA as a composite service, thus helping service providers evolve the services that their consumers demand the most.

Tuesday, August 08, 2006

SOA - more Q&A

As far as SOA goes, the predicament of service granularity is going to haunt everybody. What level of services? Would we have elementary services, which are then composed into composite services usable by service consumers? Or let's not worry about granularity, just define the services from the consumer perspective and be done with it. Oh, and what about the evolution of services? What about multiple perspectives? Questions galore, no answers...
IBM fellows do talk about this, and in general about the maturity of SOA. They have put out some material which touches upon these areas. I hope they have some answers.
Interestingly, IBM's SOMA (which appears to be a hybrid approach that borrows from MDA) looks most promising. But as IBM itself says, the whole area is still immature. For now, it might suffice to say that SOA is a journey and it may take a long time to finish it. Have patience; this is the right path, but a slightly longer one.

Friday, July 21, 2006

Walled web 2.0

The more I think about Web 2.0 and SOA, the more concerned I get. What with loosely coupled services, available anywhere, anytime, replacing applications. Is it all desired?
Well I guess, when the hype dies down, the CIOs will realise that the necessity and usefulness of these ideas are bounded within some context. One does not really want to make services available to all and sundry and create unpredictable demands on one's infrastructure. Not to mention the security and privacy nightmares.

As with any networked system of systems, one must draw boundaries. One must define authorisation, ownership and access rights within these boundaries. There really cannot be anywhere, anytime services. The services are walled, the Web 2.0 is walled. Or else it is well nigh unusable.

Friday, July 07, 2006

What does Web 2.0 mean for enterprise architects

A very concise definition of Web 2.0 is that it treats the web as a platform and lets the user control the data. There are services instead of applications, and the user composes applications from these services per his need, using the web as a platform. Participation from various sources to achieve a result collaboratively is another core theme of Web 2.0, combined with better usability and a richer experience.

What does Web 2.0 really mean to an enterprise? Is it for real? What are the benefits accruing to enterprises because of Web 2.0? What are the pitfalls?

There are any number of instances in an enterprise when you hear users complaining about lack of availability of data. Sometimes it is available at the wrong granularity, sometimes it is not as current as required and sometimes it is not available when required. Surely those users will be elated at this definition of Web 2.0, where authorisation and authentication are the only things between the user and the data. However, the granularity guarantee is another tricky bit. If suppliers don't get it right, the data will be useless for users and consequently there will be differential service proliferation. If it is too granular, performance penalties are to be borne by users. So it's not all that rosy, huh?

In the Web 2.0 world, a supplier cannot control the usage of its services. Once a service is out there in the open, one cannot easily change it without affecting known and unknown users. So suppliers get limited chances to get their service definitions right. Otherwise there will be horizontal service proliferation.

AJAX's promise of a desktop-quality user experience in Web 2.0 has to be seen to be believed. But it does open a can of worms on the security front, which needs to be addressed.

Collaboration between stakeholders in an enterprise is sought after. But it needs to be bounded by authorisation and authentication. For example, disintermediation between stakeholders is sometimes desired, but at other times it may threaten the business itself and hence is discouraged. What kind of collaboration is allowed and what is disallowed is tricky to define, much less enforce. Data privacy issues are not to be taken lightly and need serious thought.

So an enterprise architect needs to be aware of these broad issues before plunging headlong into Web 2.0.

Friday, February 03, 2006

Back Again!!!

Well, I am back again after a long hiatus (Did I spell that right?).
SOA does not seem to have progressed much.
I still see old issues, and nobody seems to be addressing them.
The focus seems to be on getting SOA infrastructure in place. Worry about actual services later.

Well, I feel the whole thing is going to unravel like all the previous silver bullets.
All tall claims and no returns. Business folks will be after IT people's heads after this latest craze dies down. God bless those CIOs taken in by SOA hype...

Wednesday, December 27, 2006

Enterprise architecture in MDA terms

MDA is a big culture shock for most IT organisations, precisely because everyone out there thinks UML is nothing but pretty pictures. Those who believe in MDA need to start small and prove the value of the MDA approach; only then can we go to the next level, that is, making EA congruent with MDA. Using MDD is a very good way to begin proving the value of MDA, unless you find organisations which are sold on MDA to begin with. In short, this is a very tough ask and a lot of farming/hunting is required to nurture the approach. Being a practitioner, I would suggest trying this approach out on a small scale, in non mission-critical projects.

Another problem we might face is that models are a rigid way of capturing knowledge. They are suitable for the more defined aspects of enterprise architecture (i.e. all circles of the TOGAF framework) but are not suitable for more fluid aspects (like business and IT strategies, as required in a few cells of the Zachman f/w). So from a TOGAF f/w perspective they are OK, but not from a Zachman f/w perspective. To be used with the Zachman f/w we may have to augment MDA with something more, some sort of unstructured knowledge repository. But this is a long way off and can be ignored for the time being.

I find it good that interest in MDA is being reinvigorated.

Thursday, December 14, 2006

Services are not ACID

ACID stands for atomicity, consistency, isolation and durability. Any transactional system needs to provide these properties. In file based systems, it was the problem of the designer and implementor (coder) to arrange for these properties. Then with the advent of RDBMSs, these responsibilities were taken up by the RDBMS. Again, in the early days of distributed computing, the responsibility (of two-phase commit) fell on designers/coders. Then came the X/Open XA architecture, which introduced a transaction co-ordinator into every RDBMS. Life became easier after that (from an ACID perspective).

ACID is critical for transactional systems. Services in SOA do not guarantee ACID properties; they require compensatory services to achieve the ACID effect. Those of you who have worked with 'file' based information systems and then moved on to work with RDBMSs will understand my concern about services not being ACID all the more. In an OLTP world, a transaction followed, after some delay, by a compensatory transaction is not exactly equivalent to an ACID transaction. This may not matter in most cases, but when it does, the effect can be pretty nasty. Especially when a service executes an automated protocol in one service call and then tries to undo its effects using a compensatory service. This kind of thing happens in straight-through processing scenarios. This remains one of my major concerns.
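
A bare-bones sketch of the concern, with illustrative names and an account balance standing in for real state; the point is the window between a service call and its compensation, during which the interim state is visible to everybody.

    // Illustrative sketch: a compensating service is not a rollback.
    // Between the call and its compensation, interim state is visible.
    public class CompensationDemo {
        static int balance = 100;

        // Service call: debit the account (takes effect immediately).
        static void debit(int amount) { balance -= amount; }

        // Compensatory service: credit the amount back later.
        static void compensateDebit(int amount) { balance += amount; }

        public static void main(String[] args) throws InterruptedException {
            debit(40);
            // Under an ACID transaction this state would never be observable
            // by others. Here, any straight-through process reading the
            // balance now sees 60 and may already have acted on it.
            System.out.println("Interim, visible state: " + balance);
            Thread.sleep(100); // the delay before compensation
            compensateDebit(40);
            System.out.println("After compensation: " + balance);
        }
    }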

SOA, funding and hidden gems

There are some good posts on services and their characteristics. The main thrust of these posts is to bring out the characteristics of an enterprise service. There is a comparison between city planning and SOA. One thought encourages different granularities of services to co-exist, with an analogy of niche retailers v/s Walmart.

However, the way projects are funded in enterprise IT will hinder this approach. Enterprise IT, as far as funding goes, is governed more like a communist politburo than a capitalist entrepreneurial system. There are no venture capitalists ready to fund (seemingly) wacky ideas. They would rather go with proven ways of doing things. So the idea of niche services co-existing with run-of-the-mill enterprise services is a non-starter. That does not mean it will not happen.

Traditionally, departments moonlight on their budgets and create funding for these niche capabilities (albeit not in service form), but then those capabilities remain outside the purview of enterprise architecture and stay in the back alleys, hidden in the enterprise hiddenware. Which also prevents proper utilisation of these capabilities. The same thing can happen in an SOA. Departments will create niche services and somehow fund them. But these services will stay below the radar for the rest of the enterprise.

An enterprise architect has to live with this fact of life and provide means to unearth such hidden gems and bring them back into the EA fold for governance. As mentioned in the posts referred to above, a collection of such niche services may be a viable alternative to a coarse-grained enterprise service, but only if we know such niche services exist.

Solving the funding issue, to borrow a term from Ms. Madeleine Albright, is above the pay grade of most enterprise architects and best left to business leaders to figure out!

Monday, December 04, 2006

SOA and demonstrating ROI

Typically, large enterprises in the BFSI space have conservative cost-benefit accounting practices, focused on accountability. Within these enterprises IT departments are often flogged for not achieving ROI. This also makes it a major area of concern for an Enterprise Architect.

In the 2007 trend analyses by IT experts, one of the major trends mentioned is the difficulty in demonstrating ROI. Traditionally, demonstrating ROI has been a problem in IT shops. With stovepipe architectures, at least you could attach your costs to the relevant information systems and could figure out the benefits accrued because of each information system. Whether it made sense or not to have a particular information system was obvious. Of course, the problem of accounting for intangible benefits was always there.

The situation became difficult with the sharing of infrastructure, be it client-server systems or, lately, EAI. With SOA gaining traction, the issue of demonstrating ROI is becoming even more difficult to handle. There are several issues, ranging from when a project is considered delivered, to how costs are measured, apportioned and paid for, to what constitutes a benefit and how to measure it. Most of the time IT folks are on the defensive in these debates.

There are problems with measuring both the fixed costs incurred while building these systems and the running costs incurred while running them. On the fixed cost front, the problem arises mainly because of the seamless reuse of infrastructure and non-functional services. Please note this is different from any sharing of infrastructure you may have seen earlier. Earlier, even with sharing, there were clear-cut boundaries between users. Now with SOA, the boundaries are blurred as far as infrastructure and non-functional services go. One does not know how many future projects are going to use these services (with or without refactoring). Hence there is no clue how to apportion the fixed costs to users. This sometimes turns projects away from using the common infrastructure, as the upfront costs are deemed too high. If yours is the first project, you rue the fact that you have to build the entire infrastructure and own up all the cost. The enterprise needs to have a clear policy for these types of scenarios, so that projects know how cost is going to be charged.

On the running costs front, various cost elements (e.g. ESB, network connections, underlying applications etc.) need to be metered at a granularity which makes sense for accounting purposes and which helps tag the metered data with a user. Here 'user' is meant from a cost-benefit accounting perspective, not an actual business user. This metering is not easy. Most of the time the underlying IT elements are not amenable to being metered at the level of granularity which helps you connect the metered data with users. Sometimes metering at such granularity adds an unacceptable overhead, enough to breach non-functional requirements. I have not seen any satisfactory solution to this problem. People have used various sampling and data projection techniques. These are unfair in some scenarios, and costs get skewed in favour of or against some information systems. The applications part of this run-time metering is relatively easy, but it still has the problems of adding overheads and breaching non-functional requirements. So people use sampling and projection techniques here too. But luckily there is not much seamless reuse of application services, hence this sampling does not skew costs drastically.
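
As a deliberately naive illustration of the sampling-and-projection idea mentioned above (the sampling rate, the cost units and the consumer names are all made up):

    // Naive sketch of sample-based metering: meter only 1 call in N and
    // project totals, trading accuracy for lower metering overhead.
    import java.util.HashMap;
    import java.util.Map;

    public class SampledMetering {
        static final int SAMPLE_EVERY = 100; // meter 1% of calls
        static final Map<String, Long> sampledCost = new HashMap<>();
        static long callCount = 0;

        // Called on every service invocation; only occasionally meters.
        static void record(String consumer, long costUnits) {
            if (++callCount % SAMPLE_EVERY == 0) {
                sampledCost.merge(consumer, costUnits, Long::sum);
            }
        }

        public static void main(String[] args) {
            for (int i = 0; i < 10_000; i++) {
                record(i % 3 == 0 ? "projectA" : "projectB", 5);
            }
            // Projection: scale the sampled figures back up. The skew this
            // introduces for low-volume consumers is exactly the unfairness
            // noted above.
            sampledCost.forEach((who, cost) ->
                System.out.println(who + " ~ " + (cost * SAMPLE_EVERY) + " cost units"));
        }
    }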

As for benefits, the debate is even more intense. It begins with the definition of when the project is deemed complete and starts accumulating benefits. E.g., in a business process automation project using SOA, with iterative delivery, one may automate some part of a business process, which results in some benefit, but the end-to-end business process may actually suffer during the transition, because it may have to carry on with manual workarounds. So how does one measure benefits in this case? With intangibles, there is the perennial problem of how to measure intangible benefits. Sometimes even measuring tangible benefits is difficult, because the data for comparison is unavailable. As with costs, measuring benefits needs metering at various levels for the elements of benefit (e.g. time, resource usage etc.). All the metering issues faced by cost measurement are faced by benefit measurement as well.

So the key problem is working out the semantics of costs and benefits with the business folks, and putting a programme in place for their measurement in conjunction with SOA. If SOA is combined with such a measurement programme, then it may be possible to demonstrate ROI with these agreed definitions. This measurement programme is peculiar in the sense that its deployment can be clubbed with SOA, but it has its own governance needs, separate from SOA. So it needs to be handled appropriately. This is more than BAM, and covers IT aspects too. Maybe we need a new acronym; how does BITAM (Business and IT Activity Monitoring) sound?

Saturday, December 02, 2006

Agile methods to kill outsourcing? I don't think so.

I hear an industry thought leader vigorously promoting Agile Methods. Nothing wrong with that. The only problem is that there is an underlying thought I don't agree with: that Agile Methods can kill outsourcing. I would like to provide a logical argument to support my opposition to this thought.

For this we need to understand the philosophies behind getting a job done. The two prominent schools of thought are Adam Smith's and Hammer-Champy's. Adam Smith proposed a division-of-labour centric approach. To get any job done, the roles were clearly segregated. Each role needed specific skills. The inputs beyond the skill that a particular role needed were provided by the roles above it. The system was made up of a lot of specialists. Each specialist made only those decisions that were confined to his role and sought decisions beyond his own role from outside, irrespective of the effect on the job at hand. (Any government office anywhere in the world is a good example of this approach, as are some public sector banks in India.)

It worked fine for a lot of years. But like any system, it developed aberrations. The big bureaucracy that this division of labour created started having bad effects on the working of systems. That's when Hammer and Champy came up with their revolutionary ideas on business process re-engineering. They proposed systems with generalists instead of specialists. These generalists were to be supported by enabling tools, such as improved IT systems and better collaboration tools like fax, telephone and e-mail. They made multiple decisions, irrespective of their specialization. They used enabling tools to make those decisions, and in the rare cases when they could not, the job was transferred to an actual specialist. This mode of getting a job done has caught on and examples are everywhere to see. (Any private bank in India or anywhere in the world is an example of this, where a bank teller supports all the functions from opening a bank account to withdrawing cash.)

This approach works because it is only rarely that one needs the services of a true specialist, as compared to a generalist supported by enabling tools. One can see a parallel between these and the way software development itself is evolving. The traditional SDLC with BDUF (big design up front) follows a Smithian approach. It expects a lot of specialists to collaborate to develop software. It needs very heavy process support. That's when you have ISO/CMM coming in.

Agile methods, on the other hand, appear to follow a Hammer-Champy approach to software development, with a slight variation. They rely on multi-role specialists instead of generalists. These multi-role specialists perform multiple roles themselves. They are specialists in these multiple roles (because of their training or experience or both), hence they need neither big process support nor the support of a lot of other specialists. The people who think this can kill outsourcing appear to base that conclusion on the following logic: since multi-role specialists are in short supply and difficult to create, the outsourcers cannot have enough of them; hence outsourcing will stop. That's what the logic appears to be.

But as I discussed in one of my previous posts about the innovation shown by outsourcers, this too can be handled innovatively. One can always replace the multi-role specialist with a generalist supported by enabling tools and achieve the same result, as originally envisaged by Hammer and Champy. One cannot beat the support systems provided for such a generalist by large outsourcing organisations. The large outsourcing organisations have the benefit of sharing humongous amounts of knowledge, which even multi-role specialists don't have access to. So agile methods should not be viewed as just an antidote to outsourcing.

As outlined in one of my earlier posts, both these approaches (viz. agile and traditional SDLC) are valid and valuable in different contexts. IT leaders need to choose appropriate methods based on their needs. As an Enterprise Architect, it is my worry to provide appropriate governance controls in a uniform framework which works for both these approaches. It is of vital importance that I put these governance controls in place so that the roles make only those decisions they are empowered to make, because it is very easy to confuse role boundaries in these two drastically different approaches.

Thursday, November 30, 2006

No stereotyping please!

Long ago I was a starry-eyed (bit of exaggeration here) entrant into the world of IT, when the IT revolution in India was about to begin. I was part of an elite 'tools group', using translator technologies to build home-grown tools for the various projects that used to come our organisation's way. Amidst all those small projects, a big depository from the western world developed enough faith in us to ask us to develop their complete software solution. The visionaries from my organisation did not do it in the normal run-of-the-mill way. They decided to build home-grown code generators, to ensure consistent quality, and created a factory model of development. I was one of the juniormost members of the team which built and maintained those tools.

Then, while working on another project for a large British telecom company (oops! could not hide the name), another visionary from my organisation put this factory model into practice in a geographically separated way and delivered tremendous cost savings. That was the first truly offshored project done by my organisation. The tools we had developed helped a lot in sending the requirements offshore, in model form, and getting code back to be tested onsite. We provided consistent quality and on-time delivery. Needless to say it was a huge success and more business came our way. Mind you, this was much before Y2K made Indian outsourcers a big hit.

During my days in the tools group I had the good fortune to attend a seminar by Prof. K. V. Nori. His speciality is translator technologies and he taught at CMU. He exhorted us to 'Generate the generator!' Coming from a compiler building background, it was natural for him to say that. But for me it was like the 11th commandment. It captivated me. We did try to generate the generator. During my MasterCraft days, I convinced two of my senior colleagues and together we designed a language called 'specL'. 'specL' has now become the basis of our efforts on the 'MOF Model to Text' standard under OMG's initiative. This is a testimony to the fact that we are not just cheap labour suppliers. We are good enough to be thought leaders within global IT.

It was not just cheap labour that helped us succeed in the outsourcing business. It was also innovation, grit and determination. That's why it pains me when somebody stereotypes Indian outsourcers as 'sub-optimal' or India as a 'sub-optimal' location. Firstly, I don't like stereotyping, and secondly it's a wrong stereotype. One can have a position opposing outsourcing, offshoring, what have you. There are enough arguments against outsourcing, but please don't denigrate a group as sub-optimal.

And if I am going to be stereotyped anyway, then please include me in the group of "all men who are six feet tall, handsome, left handed, fathers of cute four year olds". Then I may not feel as bad being called sub-optimal. (Well, handsome and left handed are aspirational adjectives, distant from reality.)

Monday, November 27, 2006

SOA in enterprises or Hype 2.0

If dot-com in enterprises was Hype 1.0, then surely SOA in enterprises is coming very close to becoming Hype 2.0. The way SOA has been touted as the next best thing to happen to mankind since sliced bread brings it closer to that dubious distinction. The vendors are promising all kinds of things, from flexibility, adaptability and re-use to lower costs, if you use their merchandise to do SOA. SOA is good as long as decision makers can separate hype from reality. I for one will be very saddened if SOA goes the same way as the dot-com hype. The following discussion is meant to separate hype from reality, so that decision makers have correct expectations and can move along the path of sustainable SOA.

1. Myth of reusable services

In my experience as an architect I have never seen as-is reuse of a business service implementation. Some amount of refactoring is always needed for it to be reused. The refactored business service actually harbours multiple services under a common facade. For a service to be as-is reusable it needs to be so fine grained that it will have problems with the non-functional attributes of services. Just to give an example: if I had a business service providing customer details along with holding details, given a customer identity, then I have a couple of options in its implementation.

I) I can build it as a composite service composed of more granular services for customer detail and holding detail.
II) I can build a monolithic service providing both customer and holding details.

Now remember the lesson we learnt in managing data: always do the join at the source of the data, because at the source you know more about the actual data and can do many more optimisations than away from the source. (Remember the old adage: don't do the join in memory, let the RDBMS handle it?) So from a non-functional perspective (scalability and performance), option II is very attractive and sometimes mandatory.

No doubt, option I gives me a more re-usable service. But it still does not give me an absolutely reusable service implementation. For example, suppose I need the customer details with holding details for three different kinds of execution scenario, viz.
a) an on-line application for customer service,
b) a batch application to create a mass mailer and
c) a business intelligence application to understand customer behaviour (with holding as one of the parameters).

Even though I have more granular services, they are not all usable in all these different execution contexts. I cannot simply call the granular services in a loop to get the bulk data needed for scenarios b) and c) above. So re-usability is restricted by execution context. Of course you can throw hardware at this problem to solve it. But then your costs escalate, and any savings you made by reusing software will be more than offset by hardware costs. So just because you organise your software in terms of services (which essentially specify the contract between user and supplier and nothing more), you are not going to get re-usability. It will enable re-usability within an execution context, but not universal re-use. So if we treat services as explicit contract specifications between users and suppliers, then we should attempt to reuse these contracts. This, however, does not automatically translate to implementation reuse.
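
The two implementation options, sketched schematically in Java; the lookups, the names and the SQL string are illustrative stand-ins, not a real implementation.

    // Schematic contrast of option I (compose granular services, join in
    // memory) and option II (monolithic service, join at the data source).
    import java.util.List;
    import java.util.Map;

    public class GranularityDemo {
        // Option I: two granular services, composed by the caller.
        static Map<String, String> getCustomer(String id) {
            return Map.of("id", id, "name", "A. Customer"); // stand-in lookup
        }
        static List<String> getHoldings(String id) {
            return List.of("FUND-1", "FUND-2");             // stand-in lookup
        }
        static String compositeView(String id) {
            // The "join" happens here, in memory, away from the source.
            return getCustomer(id) + " holds " + getHoldings(id);
        }

        // Option II: one monolithic service; the join is pushed down to the
        // RDBMS, which knows the data and can optimise. (Illustrative SQL.)
        static final String JOINED_QUERY =
            "SELECT c.id, c.name, h.fund FROM customer c " +
            "JOIN holding h ON h.customer_id = c.id WHERE c.id = ?";

        public static void main(String[] args) {
            // Fine for scenario a), one customer online. For scenarios b)
            // and c), calling compositeView in a loop over millions of
            // customers is exactly where option I breaks down.
            System.out.println(compositeView("C42"));
        }
    }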

2. Myth of composite applications

This myth is related to the one above. In most other engineering disciplines, real world components are standardized, and higher level solutions are typically component assembly problems. Not so in software. Even if we have services, their assembly does not necessarily behave within accepted parameters, even though each single service might behave OK. So composing implementations to arrive at a solution is not so straightforward. Many vendors will have you believe that if you use their software, most of your software development will reduce to assembly of services. This is not true, for the following reasons. The correct granularity and definition of services is known to the user organisation, not the vendor. These service definitions are dictated by the user organisation's business practices and policies. Each organisation is different, so a vendor can never supply you those service definitions. If a vendor does not know what the services look like and what their properties should be, how on earth is he going to guarantee that a composition of such services will behave in the desired manner? And as outlined in the point above, implementation reuse is a big problem. So even on that front vendors cannot help you. So the composite application will remain a myth for some time now. The vendor sales and marketing machinery will show you mickey mouse applications built using the composite apps scenario. But demand to see at least two productionized composite apps where the majority of the constituent services are shared between the two. My guarantee is, you won't find any.

So is SOA a BIG hype about nothing? Not exactly. It does provide the following benefits.

1. Manageability of software with business alignment

The single most important contribution of SOA is that it connects software with business. In an SOA approach, one can make sure that all software is aligned with business needs, because all software is traceable to business needs. The whole edifice of building, maintaining and measuring the utility of software revolves around business services in an SOA approach. So it becomes easier to maintain focus on the business benefits (or lack thereof) of software. With the traceability it provides, software becomes a manageable entity instead of an unwieldy and randomly interconnected monolith. And there is some reuse possible in terms of non-functional services (such as security, authentication, personalisation etc.).

2. Ability to separate concepts from implementation

The next important contribution of the SOA approach is the focus it brings on separating interface from implementation. The logical extension of this approach is to separate conceptual elements from platform elements. So if you are using an SOA approach to software development, you have the necessary inputs to create a large scale conceptual model of your business. You just need to filter out the platform specific stuff from the interfaces you defined. You can further distill these interface specifications to separate the data and behaviour aspects. These are the really reusable bits within your business. It is then easy to figure out exactly how these reusable bits can be implemented on different implementation platforms. This will give you the necessary jump start for your journey towards a conceptual IT world.

So in my opinion SOA is good and it is the way to go, but not for the reasons stated by vendors. It is not going to make software drastically cheaper, nor is it going to make software development drastically faster. It is just a small step in a long journey towards making enterprise software an entity managed by business folks rather than IT folks.

Tuesday, November 07, 2006

Agile, Iterative or Waterfall?

There has been a lot of interest in, and misconceptions about, various life cycle methods for solution development. Please note carefully that I am saying solution development and not software development. Enterprises develop solutions to problems. The software content of the solution is developed by the IT sub-organisation. The rest of it is assigned to different sub-organisations within the enterprise. So when we discuss software development life cycle methods (I'll use the short form SDLC henceforth), we must remember solution development life cycle methods (I'll use SolDev as the short form henceforth) as well. A software development and deployment method has to synchronize with the solution development and deployment method.

There are various SDLC methods in vogue. The waterfall method has been in use for ages and has its supporters and detractors. Iterative methods originated some time back and are in use in many enterprises. The agile method is the newest kid on the block and is yet to make serious inroads into the enterprise IT scenario.

Waterfall is a sequential method: it waits for the previous phase to finish completely and expects it to deliver a signed and sealed deliverable. This deliverable is enhanced in the next phase, until the software gets delivered. It assumes that the requirements are well understood and won't change during software development. It is the most risky of the development approaches and has quite a large failure rate.

The iterative method is, as its name suggests, iterative. It creates an initial, fully functional version of the system and iteratively adds functionality to make it complete. During each iteration it also takes into account the user's feedback on the earlier delivered functionality and corrects the implementation.

The agile method is a more aggressive version of the iterative method, where timelines are shorter and sacrosanct. It also believes in face to face communication rather than written documentation.

Each of them has its own strengths and weaknesses. And choosing one over the other is not a trivial decision.

A solution development method is normally iterative, sometimes waterfall, but rarely agile. Solution development and deployment normally involve dealing with real life things, which are not as soft as software. That may explain why they don't use agile methods that much.

Typically, quick-fix and operational solutions rarely involve a big solution design and deployment effort; the major effort is consumed by software development and deployment. Hence agile methods can be deployed as the SolDev method. Tactical and strategic solutions, on the other hand, involve a significant solution design and deployment effort, so an iterative method appears the right choice for SolDev. Modern enterprises rarely use the waterfall method, as it is too fraught with risk. Again, I am referring to the intent of the solution and not the systems, when I say operational, tactical or strategic.

For example,

If you were to repair a leaking window in your house, you would call a tradesman, interact with him and get the job done in a day or two. You would give constant feedback and get it done the way you want. This is a quick-fix solution, and agile can be (so to say) the SDLC method.

Whereas if you were to add a conservatory to your house, you may have to interact with lots of tradesmen (or outsource to a contractor), you have to worry about changing the furniture settings in your house, and you may have to change the nature of the games at your kid's birthday party. That's a tactical solution and can hardly be agile. You may iterate over the development of this solution, by first building the conservatory, then adding the furniture and relocating the existing furniture. You also have to think about new games to include in the birthday party which take advantage of the conservatory and furniture settings. Here the actual building of the conservatory is like building software, and the other things you do are part of solution development and deployment. Both of these need to follow the same life cycle method, otherwise you'll have problems. An agile method for both SDLC and SolDev won't work, because you would not have the bandwidth to support software development (i.e. building the conservatory) as well as solution development (i.e. doing the other stuff, such as buying new furniture and relocating the old). And just the SDLC can't be agile, because the rest of the solution will not be ready anyway.

The same goes for building a new house altogether. That's a strategic solution, and you would still want an iterative approach: build the basic structure of the house, add all the utilities, then the interiors and finally the finishing. Constantly giving feedback and checking for yourself how the house is getting built.

Were you to do it in the waterfall model, you would call in a contractor, tell him what you want and hope he does it before your required date. Well, if it is something as standardised as house building and the contractor is reliable, you may consider this option.


So it is quite clear that different life cycle methods are suitable for different kinds of SolDev and SDLC. They have their strengths, but need to be deployed in the right kind of scenario. An enterprise architect needs to define the decision framework for making this choice within an enterprise.

To reuse or not to reuse?

As soon as I posted about reuse, an old colleague of mine wanted to reuse a small piece of code I had developed quite a while back.

It was nothing great. When we were attempting to make client-server connections to CICS in the good old days, we were hitting the limits on the CICS COMMAREA. We thought that if we compressed our message, we would not hit the limit. Since it was an over-the-wire message, all content was required to be non-binary. One place where we thought we could save space was by packing integers into a higher base representation, because those messages had a lot of integers. A base 64 representation of integers would take 6 digits as against 9 in a base 10 representation, and would still be capable of going over the wire. This piece of code was required to be as optimal as possible. So we developed a function which would pack integers into a base 64 representation. And we used a trick to make it faster, by taking advantage of the EBCDIC representation. It is part of the library we supply along with our MDA toolset.

My colleague wanted to reuse the code, albeit in a different scenario, as it was well tested and has been in production for so many years. Needless to say, he would have fallen flat on his face had he put blind faith in the code. It would have failed because it relied on the EBCDIC representation, and he was trying to deploy it in a non-EBCDIC setting.
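
For flavour, here is roughly the shape of such a packing routine, reconstructed from memory and written the portable way, with an explicit digit table. The original's speed trick derived each digit by character arithmetic, relying on where characters sit in the EBCDIC code page, and that reliance is exactly what broke elsewhere. This is an illustrative reconstruction, not the actual library code.

    // Illustrative reconstruction: pack a non-negative int into base 64
    // using printable characters, so it can travel in a character-only
    // COMMAREA. Six characters cover the full positive int range.
    public class Base64Pack {
        // Portable version: an explicit alphabet works in any charset.
        static final String DIGITS =
            "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz+/";

        static String pack(int value) {
            StringBuilder sb = new StringBuilder();
            do {
                sb.append(DIGITS.charAt(value & 63)); // value % 64
                value >>>= 6;                          // value / 64
            } while (value != 0);
            return sb.reverse().toString();
        }

        public static void main(String[] args) {
            System.out.println(pack(999_999_999)); // 9 decimal digits, far fewer characters
            // The faster variant computed each digit as an arithmetic offset
            // from a base character, an assumption valid only in EBCDIC.
        }
    }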

Why am I narrating this story? It just re-emphasises my point about implementation reuse. Even with the best of intentions and support, implementation reuse is not as easy as it looks. My colleague was lucky to have me around. Who would think such a 50-line piece of code could go so horribly wrong in a different execution context? If we had separated the concept from the implementation in this case, and generated an implementation for my colleague's execution context, it might have worked. But without that, he has to refactor the code, which may wipe out the gains he hoped to get by reusing it. I am not sure how I could have made that piece of code more reusable than it is, without breaking the non-functional constraint imposed on it by its own execution context.

Now with refactoring we could have a piece of code which is more reusable than it was, but my colleague would have to spend the effort to make it so. It depends on whether he has that kind of investment available to him. And it still won't guarantee that it won't fail somebody else's requirement in a totally different execution context. It is making me more convinced: either have concept reuse, or be prepared to refactor while reusing. And don't expect 100% reuse.

Monday, October 30, 2006

Don't build for reuse, reuse what you have built

All architects are very passionate about reuse. Reuse is one of the most admired and promoted principles. And often enough, business sponsors of IT projects seem to spurn the extra funding required to make 'something' reusable. That's an irritant faced by most architects. So what is the problem? Why can't this seemingly smart person from the business side see the wisdom of building something for reuse and spend that extra cash? Is it possible that they are smarter than the architects? Probably they know that by spending that extra cash they are not really going to get that reusable artifact.

Maybe we need to turn the argument on its head to get the business sponsor's perspective. Imagine an architect saying to the sponsor, "Don't give us extra cash to build something reusable; rather, we'll save some cash by reusing some of the existing artifacts." [That will be music to the sponsor's ears.] However, it is a difficult statement to make.

An architect will protest: "How can we reuse something which was not built for reuse? How do we know how well the artifact satisfies my functional requirements, let alone the non-functional requirements? If I have a problem re-using it, who is going to help me out? How much additional effort am I going to spend in understanding it and plumbing it into the rest of my solution? Is that effort much less than the effort I would have spent building it myself? Do I save on maintenance, or am I doing 'clone and change' reuse?"

If you look at it, most of this applies to something which was built for reuse too. How does one build something for reuse in a potentially unknown future scenario? With unknown functional as well as non-functional requirements? What organization (and of what size) does one need to support this reuse? Difficult questions! So being a pragmatist, I am inclined to give up the notion that something worthwhile can be built for as-is reuse in the future, and tend to agree with those (smart!) business sponsors. Wait, before you declare me a traitor to the architect community, I would like to state that some reuse is still possible.

What I believe is that reuse at the conceptual level is still possible, and useful. So if we separate the concepts from the implementations (a la MDA: CIM v/s PIM v/s PSM), we can reuse the concepts already built, and choose a more appropriate way to connect with an available implementation for the current scenario. What are these concepts? They can be anything conceptual: a business process, a conceptual data model, a set of business rules. They are closer to business requirements than to IT design or implementation. Most probably you will use fragments of these conceptual elements rather than a complete conceptual element. Does this need a full blown MDA tool set? No, not in my opinion. One needs to separate concepts from implementations using any notation one is familiar with, and have a repository of such concepts. For any new solution you are going to build, the first step would be a search in this concept repository. You find something useful, you (re)use it. The effort and organization needed for creating and maintaining such a repository is not huge. And of course you need to adopt slightly different solution development practices to institutionalize this. [Well, for a small consideration, we can help you there ;-) ]
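
A minimal sketch of what separating concept from implementation can look like in code, with a business rule as the reusable concept; the rule and all the names are hypothetical.

    // Minimal sketch: the concept (a business rule) is kept apart from its
    // implementations, so the concept can be reused across platforms.
    public class ConceptReuseDemo {
        // The reusable concept: pure business logic, no platform detail.
        interface DiscountRule {
            double discountFor(double orderValue);
        }

        // One binding of the concept for the current scenario. Another
        // solution might bind the same rule to a stored procedure or a
        // rules engine; the concept itself is what gets reused.
        static class InMemoryDiscountRule implements DiscountRule {
            public double discountFor(double orderValue) {
                return orderValue > 1_000 ? orderValue * 0.05 : 0.0;
            }
        }

        public static void main(String[] args) {
            DiscountRule rule = new InMemoryDiscountRule();
            System.out.println(rule.discountFor(2_000)); // 100.0
        }
    }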

So, don’t build for reuse, rather reuse what you have built!

Wednesday, October 18, 2006

Is single version of truth achievable?

Did you ever have a programme in your IT organisation to build 'the single view' of data for some subject area, say Customer or Product? These types of programmes go by different names: Book of Records, System of Records, Single Version of Truth and so on. Have you experienced the agony one goes through trying to create a single view acceptable to the different viewpoints that exist in an organisation? There will always be some viewpoint which wants some data at a different level of granularity or a different level of currency, or both, from the rest of the viewpoints.

No, I am not talking of CDI/MDM. The problem I am talking about is defining what 'the single view' of a particular subject area should look like. Even after you decide how your single view of a subject area should look, there are further challenges of collecting, reconciling, cleansing and hosting the data. That's what CDI/MDM predominantly addresses. But who helps you decide what the right single view of a particular subject area is? Frankly, nobody.

So isn't 'a single view of a subject area' a misnomer? Let me make a logical argument. Even when we build an IT system, we start with a nice third normal form data model. But the non-functional requirements, such as performance and scalability, make us abandon the third normal form data model and introduce a level of redundancy, what we popularly call denormalisation. And we live with it.

Why not take a similar approach to data at the enterprise level? Let's accept the fact that the different viewpoints are not always reconcilable, and that the single view of a subject area is impossible to achieve. Why not build a fit-for-purpose single view of the subject area instead? There can be as many single views as there are different purposes. Some of the purposes may collaborate with each other and can reuse each other's single views. So in reality there can be fewer single views of a subject area than the number of purposes. Of course, you need to create mechanisms to keep these different single views in synchronisation. Well, why create one level of indirection; why not go to the multiple views underlying this single view directly? Maybe using a service facade? Well, I have answered these questions in one of my earlier posts. So you would need these fit-for-purpose subject area single views. You can use MDM/CDI technologies to build them.
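
Sketched as a service facade, with one fit-for-purpose view per purpose; the subject area, the two purposes and all the names are illustrative.

    // Illustrative facade: each purpose gets its own fit-for-purpose
    // "single view" of the Customer subject area, built over the same
    // underlying sources.
    public class SingleViewDemo {
        // Fit-for-purpose view for an online channel: current and fine-grained.
        interface OnlineCustomerView {
            String currentDetails(String customerId);
        }
        // Fit-for-purpose view for analytics: historical and coarse-grained.
        interface AnalyticsCustomerView {
            String monthlySummary(String customerId, String month);
        }

        // One facade can serve both purposes, keeping the views in sync
        // behind the scenes instead of forcing a single reconciled view.
        static class CustomerFacade implements OnlineCustomerView, AnalyticsCustomerView {
            public String currentDetails(String id) { return id + ": live record"; }
            public String monthlySummary(String id, String m) { return id + ": summary for " + m; }
        }

        public static void main(String[] args) {
            CustomerFacade facade = new CustomerFacade();
            System.out.println(facade.currentDetails("C1"));
            System.out.println(facade.monthlySummary("C1", "2006-10"));
        }
    }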

And if you don't believe me, then I have a bridge to sell you ;-)

Wednesday, October 11, 2006

Communism and enterprise architecture

It is important that one learns how to solve common problems using patterns. It is equally important to learn how not to solve problems, using anti-patterns. One of the greatest anti-patterns relevant to enterprise architecture that comes to my mind is Communism.

Communism is an ideology that seeks to establish a future classless, stateless social organization. There is very little for one to disagree with in this noble goal. The problem arises when one tries to follow the migration path from the 'as is' state to this nice and wonderful 'to be' state.

Enterprise architecture seeks to define a similar nice and wonderful 'to be' state for enterprise IT and tries to provide a migration plan from a seemingly chaotic 'as is' state. Therein lie the similarity and the lessons for EA. What are the important lessons to be learnt from the failure of Communism?

Centralised command and control alone cannot guarantee results

Enterprise architects sometimes try to rule by diktat: thou shalt do this and thou shalt not do that... Setting common principles is necessary. But the failure of communism teaches us that a central politburo can command whatever it likes; things need not happen on the ground as per its diktat. At least there is no guarantee that the spirit of these principles (diktats) will be observed. With just the letter of the diktats enforced, the expected results will not follow. So there should not be too much dictating, and whatever is dictated must be governed to make sure that it is followed in letter and spirit.

Evolution, rather than revolution, works

Communist ideologues decided that the current system was too broken to be fixed, hence they advocated a violent overthrow of the current system, replacing it with a new (better/improved) system. Such a revolutionary approach did not work in practice. It is indeed nearly impossible to design a system from scratch as a replacement for another working system and replace the old with the new in one go. It is more advisable to chip away at the problems of the old system, replacing parts of it as we go along. It is also important to keep readjusting priorities as we go along, because nothing in this world is static.


Checks and balances are required in governance

Another ill effect of centralised command and control was corruption and general inefficiency. The middlemen prospered without adding any value. So a proper set of decentralised checks and balances is absolutely a must for efficient governance. In the enterprise IT world, business and IT folks exhibit a certain amount of tension in their relationship. So EA must create a balanced mechanism where both sides are represented and heard, and decisions are made which are acceptable to both parties. Including all stakeholders is a must for efficient governance. This would ensure the right solutions get developed, and not just the politically correct ones.

Stakeholders involved must buy into 'to be' state

All communist states had a significant number of capitalists who would never agree with the 'to be' state as stated by the communists. So the communists could never reach their desired 'to be' state, no matter what their migration plan was. Mind you, these capitalists were not just ideologically capitalist; rather, they had a stake in being capitalist. Pol Pot, one of the communist megalomaniacs, tried to address the problem by eliminating such dissenters on a mass scale. Even that did not work. So it is very important that a significant section of the business and IT organisation buys into the 'to be' state, without which there will be too much friction and resistance to change. This needs to be remembered in conjunction with the second lesson above.

People involved must be able to connect current happenings with 'to be' state.

The empty storefronts and long queues for daily essentials in communist states did not reconcile with the tall claims of progress by the central politburo. The communists did put a man in space and fired giant rockets. But where it mattered most, the daily lives of their stakeholders, they failed to deliver. Enterprise architects fall into a similar trap. A set of principles, a town plan and what have you will be taken with a pinch of salt by common IT and business folks unless you show results on the ground. Stakeholders must be able to connect the things happening around them to the grand vision of the enterprise architects, based on the results they have seen so far.

These are only a few of the lessons learnt from the failure of communism. Enterprise architects can keep learning from its history and make use of it as an effective anti-pattern.

Friday, October 06, 2006

Demand supply mismatch in IT shops

Demand management in enterprise IT shops is a perennial problem. The problem is actually one of demand-supply mismatch: there is a gap between the supply of qualified IT professionals and the ever-increasing IT demand from enterprises. This assertion is based on personal experience, and I don't have any data right now. My observation is that the big enterprises I have worked with invariably have a big IT backlog.

Enterprises have tried various options, including outsourcing, to tide over these issues. Outsourcing organisations have bigger resource pools of qualified professionals, and other enablers, to help match demand. But there are situations where even outsourcing does not help in handling the demand-supply mismatch.

If a shortage of professionals with a particular skill is the reason for the demand-supply mismatch, outsourcing can help. Whereas if the reason for the backlog is unsmoothed demand on one entity within IT, making that entity a bottleneck, then steady-state outsourcing does not help, which in turn gives rise to more demand-supply mismatch. Mind you, this is not some fixed entity within the IT shop; any entity can be sucked into this situation based on its role within the various IT projects that are going on. If your IT shop is organised on the basis of SDLC roles, then that entity can be the pool of senior designers, system testers, even enterprise architects. If your IT shop is organised by architectural layers, then it can be the front-end, business logic or database unit. Or if your IT shop is organised by functional components, then it can be any of the functional components.

It might so happen that a large number of projects starting now will hit that particular entity around the same time, causing a demand surge.

What can be done to handle such situations?

One obvious solution that comes to mind is: don't start all the projects at once. But the problem is that one cannot predict future demand. The situation can still arise even if you deliberately defer projects; some other project might crop up in future with business or regulatory imperatives to start, and cause the demand surge. Also, project budgeting and planning in IT shops happens periodically, which does not help; one cannot really have these activities aperiodically. So what's the solution?

The solution, again, is outsourcing. What we saw earlier is a case of proactive outsourcing, which takes care of some problems. The problems arising out of a demand surge can be handled by reactive outsourcing. Outsourcing offers an advantage in terms of making IT expenditure a 'variable cost', so committing and withdrawing resources is easy. IT shops can work out deals with outsourcing companies on a contingency basis, committing some resources permanently to a contingency resource pool, with an agreement to ramp this pool up in case of a demand surge and ramp it down when demand ebbs. The advantages are:
  1. Outsourcing companies have resource pools which can absorb these demand surges.
  2. Outsourcing coupled with offshoring makes this otherwise dead investment economically viable.

Outsourcing companies have global knowledge and can work out deals (billing rates, utilisation etc.) to their advantage. It's a win-win proposal.

And as an EA, it makes me happy, because none of my strategic projects will be derailed by a demand-supply mismatch.

Tuesday, October 03, 2006

Evolution of an IT worker

IT folks, early in their careers, think technology has solutions for all the ills within enterprise IT departments. There is a technology solution for every problem: something does not work, use that tool, automate this, do straight-through processing.

After a while, they realize no amount of technology is going to help unless there are proper processes. This is the second stage of evolution: alongside technology, processes are now deemed necessary. Our IT pro goes on a process-building spree. He may even build processes to define processes. And then, after toying with technologies and processes for a while, he realizes it is not really having the desired effect on the enterprise IT scenario.

That's when he realizes the importance of people. Empowered people who have bought into your ideas can make things happen. This is the next stage of evolution: everything is achievable with the right kind of people; we don't need any technology or processes.

And when this people-only approach fails miserably, our IT pro achieves his nirvana by recognizing that a judicious mix of the right amount of technology, effective processes and efficient people is what makes IT work for an enterprise.

It is very important for an enterprise architect to bear this evolution in mind, for he has to deal with folks at different levels of it. He will have to work with many technology taskmasters, a few process pundits, fewer people's politicians, and even fewer who have achieved IT nirvana. To keep delivering the right enterprise architecture and sustain it, he has to take all these people on board and make proper use of their enthusiasm and leanings. Among themselves, they present the right mix of people to define, build and govern an efficient enterprise architecture organisation.

Needless to say, an enterprise architect must have reached this IT nirvana himself to realize this.

Friday, September 29, 2006

Enterprise architect, solution architect, what's the difference?

I see most architecture practices have a progression marked from solution architect to enterprise architect. So what does it take to make this progression? Is it, for example, that once you have been a solution architect for 'n' projects, you are qualified to become an enterprise architect? Or is it that once you have been a solution architect for 'n' types of project, you can become an enterprise architect? Or is it analogous to a caterpillar becoming a butterfly? Is there a moment of Zen when a solution architect becomes an enterprise architect? Can an enterprise architect descend to become a solution architect? Is it really a descent?
Let me attempt to answer these questions per my understanding. To me, a solution architect provides a framework so that a sound solution can be designed and implemented. Since a solution typically spans multiple organisational entities within the enterprise, the framework thus established is, in a sense, valid for the entire enterprise. So what value does an enterprise architect add over and above this? An enterprise architect has to set up such a framework for the entire enterprise, not restricted to some entities within it. A solution architect has some freedom in setting a framework for his solution, based on the overarching framework for the enterprise. He can override the enterprise-wide framework, if his solution so demands, after following the governance protocol. A solution architect can extend the framework and make it more granular. That is, the enterprise-wide framework will be coarse-grained, whereas the solution-level framework will be fine-grained. The solution-level framework will have some reusable parts, which could be envisaged in any solution; those should be moved to the enterprise framework. Lifecycle changes happen in a solution-level framework until the solution gets deployed, and then the framework is frozen. An enterprise architect, on the other hand, has to make sure his framework is deployed rightly across various solutions and govern the changes and deviations from it. He also has to keep evolving the organisation-wide framework, all the time.
So, from an object-oriented viewpoint, a solution architect is a base class (which appears counter-intuitive) and an enterprise architect is a derived class. An instance of enterprise architect is also an instance of solution architect, but an instance of solution architect is not necessarily an instance of enterprise architect. Once a solution architect develops the ability to generalize and abstract architectural concepts, he can progress to become an enterprise architect.
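Rendered as a minimal Java sketch (the class and method names are mine, purely illustrative):

    // Illustrative only: every EnterpriseArchitect 'is a' SolutionArchitect,
    // but not every SolutionArchitect is an EnterpriseArchitect.
    class SolutionArchitect {
        void establishFramework(String scope) {
            // frame one solution spanning several organisational entities
        }
    }

    class EnterpriseArchitect extends SolutionArchitect {
        @Override
        void establishFramework(String scope) {
            // generalize: the framework now covers the whole enterprise
        }
        void governDeviations() {
            // govern overrides from solutions; keep the framework evolving
        }
    }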

Friday, September 15, 2006

Requirements management is crucial

It is pertinent to ask how other disciplines of engineering are able to build systems with guarantees, whereas software engineering cannot guarantee anything. Is it due to the 'softness' of software, or are there fundamental differences?

One fundamental difference between business systems and the systems built by other engineering disciplines is that the latter deal with the physical world, which is well understood. There are precise models of such systems, and these models scale up. We in software engineering equate models with pictures, but what I mean is that the other disciplines have higher levels of abstraction to express the detail, be it a set of partial differential equations or a complex formula of that kind. The key is that these are models rather than enumerations, and system requirements can be expressed in terms of these models and the measures used in them.

For example, one can specify the behaviour of a physical system using properties such as pressure and temperature, and then specify requirements in those terms. The implementer knows how components behave for a given input and can construct an implementation meeting the requirements. The implementer can find out what the temperature will be for a given pressure, and so can decide how to set the pressure to achieve the expected temperature.

I think the equivalent measures for business systems are data and process. Unfortunately, we have no clue how they behave in isolation or what their interrelationship is. There are no partial differential equations describing the behaviour of data and process, so we have to specify requirements as an enumeration over data and process.
Imagine if an engineer had to specify what the temperature will be for every value of pressure, and then describe the pressure range he wants the system to operate in, with a maximum temperature. Yet this is exactly what we do as requirement specification in software engineering.
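To see the contrast, here is a toy Java sketch, assuming the ideal gas law purely for illustration: the model expresses the whole behaviour in one formula, while the enumeration must spell out every case and is incomplete the moment a case is missed.

    // Toy contrast: a model (one formula) vs an enumeration (case by case).
    class TemperatureSpec {
        static final double R = 8.314; // gas constant, J/(mol*K)

        // Model: T = PV/(nR) covers every valid input in one expression.
        static double byModel(double pressure, double volume, double moles) {
            return (pressure * volume) / (moles * R);
        }

        // Enumeration: behaviour spelled out point by point, the way
        // business requirements usually are; any input not listed is a
        // gap in the specification.
        static double byEnumeration(double pressure) {
            if (pressure == 100000.0) return 293.0; // illustrative sample
            if (pressure == 110000.0) return 322.0; // illustrative sample
            throw new IllegalArgumentException("behaviour unspecified");
        }
    }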

And if your enumeration is not complete, you have a flaw in the specification, and nobody can build a correct working system from a flawed specification. Of course, we still have the problem of converting a specification to an implementation without introducing any error! The MDA approach is trying to address that, and the manual way of converting requirements into an implementation is well understood by now. It is a problem, but a better-understood one.

Yet there are some systems built using software engineering, especially aviation-related or embedded systems (like the one found in a washing machine), where a more formal approach is adopted: the requirements are specified formally and then validated before the implementation is built. This is possible because either the requirements can be specified, as in other branches of engineering, using scalable models, or the enumeration is tiny and hence free of the scalability problem.

So for business information systems, there are the following problems:

  1. The means to represent an abstract specification (the model) are not mature
  2. The completeness and correctness of such a model are difficult to establish
  3. Such a model may not scale (for large systems, the model becomes bigger and more complex than the system itself)
  4. Converting such models into implementations (if you can address all of the above, you can use an MDA approach or do it manually)

These are not easy issues to handle, especially items 2 and 3 in the list above.

What has this got to do with enterprise architecture? Well, as an enterprise architect, one is helping build information systems for the enterprise while trying to reduce TCO, protect investment and avoid obsolescence. All the modern approaches which aim to simplify the building and maintenance of information systems (e.g. ESB, SOA) need requirement specifications, with their own representation mechanisms (e.g. BPMN). If one is not aware of these fundamental issues in system requirement specification, these implementations are not going to succeed. So while one can delegate the implementation part to vendor tools and project teams, the EA must take charge of building, managing and governing these requirement specifications as a top-priority item. This is also a key to better business-IT alignment.

Tuesday, September 12, 2006

Services as contract

Recently we had a visit from Bertrand Meyer (of Eiffel fame). He mentioned how the Eiffel language helps make the contract between user and supplier explicit, and elaborated on the different ways the contract definition can be used (one of the interesting uses he showed was 'push button' testing). When I asked him how the language discovers contradictions in a contract specification, he said, "It does not." That was kind of disappointing; a wrong contract specification, however faithfully implemented by the implementor, is no good.

This, incidentally, is the point I made in one of my earlier posts. Not only must one make the contract explicit, as most SOA proponents propose, but we must also have the ability to discover contradictions in the contract specification. We must be able to see whether multiple contracts can co-exist and satisfy some common properties. It is not easy, but it is an important aspect of contract specification and must be addressed. The service definition should define how compatibility is established between various services (which essentially are contracts), and governance must ensure that this continues to be the case. Such definition-time checks will help prevent costly budget and time overruns, not to mention service proliferation and a governance nightmare.
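To illustrate the gap Meyer conceded: contracts can be made explicit (approximated here with Java assertions), yet nothing checks at definition time whether two contracts can co-exist in a composition. A minimal sketch with invented service names:

    // Two contracts, each individually honoured, that contradict each
    // other when composed; no definition-time check catches this.
    class PricingService {
        // Postcondition: the computed fee is strictly positive.
        double fee(double amount) {
            assert amount > 0 : "precondition: amount > 0";
            double fee = amount * 0.01;
            assert fee > 0 : "postcondition: fee > 0";
            return fee;
        }
    }

    class RebateService {
        // Precondition: accepts only non-positive adjustments.
        void adjust(double delta) {
            assert delta <= 0 : "precondition: delta <= 0";
            // ...
        }
    }

    // Composing them breaks RebateService's precondition, but only at
    // run time (with assertions enabled via -ea), never at definition time:
    //   new RebateService().adjust(new PricingService().fee(100.0));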

Friday, September 01, 2006

Enterprise architects must show leadership

I have observed that in most organisations the Enterprise Architect (EA) community is viewed as 'ivory tower ideologues' creating nothing but 'PPTware'. In the entire IT organisation, no other community is derided more than the Enterprise Architect community (maybe the project management community can compete with it).

Business and IT leaders sanction the Enterprise Architecture organisation and its budgets, partly because all the analysts point to such a need. But I am not sure how convinced they are about the necessity of Enterprise Architecture. The solution implementor (for lack of a better word) community always wants to be left alone and does not want to be dictated to by the Enterprise Architect community.

This is not an ideal situation for the Enterprise Architect community to be in, where neither your superiors nor your subordinates have any faith in you. Is it because the Enterprise Architect community always devises these nice 'end games' or 'to be' states or what have you, but fails to lead the IT organisation to that utopia? The problem arises when, in order to reach the end state, what needs to be done in the near future is not spelt out clearly. How does one trade off the pressures of business change, changing technology and organisational culture, and still work towards the desired end state? The Enterprise Architect community must provide practical answers to these trade-offs without losing sight of the end state. This is very difficult.

Just to cite an example: in an IT organisation I was working with, all funding was tied to business benefits. Some of the infrastructural projects that needed to be carried out in order to reach a desirable end state never had a chance of being implemented, because the cost-benefit analysis stacked up heavily against them. One cannot blame the business for having such a stringent benefit-centric approach, because in the past the business had burnt millions without IT producing a single usable artifact. An Enterprise Architect needs to tread through such situations and provide viable, practical approaches.

This, in essence, is the challenge to the Enterprise Architect community, and when it successfully tackles these situations, it will gain the respect of the overall IT community. The job does not end with defining the Enterprise Architecture; that is a mere start. The real challenges are in the governance and deployment of Enterprise Architecture, and in showing the necessary leadership.

Monday, August 21, 2006

SOA Questions answered

I am thankful to Sam Lowe for pointing out the work going on at the OASIS forum, which seeks to address the core issue of service definition. The published SOA reference model does provide the necessary conditions for what a service definition entails. These definitions are more from the point of view of a service provider: they state that if I have a capability which I want to expose as a service, this is what I must minimally provide for the capability to be successfully exposed as a service. This will help address questions such as:

  • What is its execution context (i.e. pre- and post-conditions)?

  • What are the expected service levels? Etc.
There is one more addition I would love to see to this reference model, more from the point of view of a consumer, which in turn can help service providers with service evolution. This addition would help a service consumer answer questions such as:

  • If I do not have the right service for the task at hand, can I choreograph the services I do have into a composite service?

  • Will this composite service have the required functional and non-functional properties?

This would need a specification of composability in both the functional and the non-functional sense. I should be able to specify a 'Plan of Action' (PoA) using services, and to specify functional and non-functional constraints on this PoA. The PoA is not necessarily only temporal (like a process); it can also be structural (like a 'join' in the relational world). The reference model can then constrain the service specification such that, using it, I can check the viability of the PoA.
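A minimal Java sketch of what such a PoA specification might look like; none of these names is part of the OASIS reference model, they are only my illustration of the idea.

    import java.util.List;

    // Steps may compose temporally (sequence) or structurally (join).
    enum Composition { SEQUENCE, JOIN }

    class ServiceRef {
        final String name, version;
        ServiceRef(String name, String version) {
            this.name = name; this.version = version;
        }
    }

    class NonFunctionalConstraint {
        final String metric;   // e.g. "latencyMs"
        final double limit;    // e.g. 200
        NonFunctionalConstraint(String metric, double limit) {
            this.metric = metric; this.limit = limit;
        }
    }

    class PlanOfAction {
        final Composition composition;
        final List<ServiceRef> services;
        final List<NonFunctionalConstraint> constraints;
        PlanOfAction(Composition c, List<ServiceRef> s,
                     List<NonFunctionalConstraint> n) {
            composition = c; services = s; constraints = n;
        }
        // Viability sketch: a real check would reason over each service's
        // published contract; here we only assert the plan is non-empty.
        boolean isViable() { return services != null && !services.isEmpty(); }
    }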

The usages of such a PoA are manifold. For example:

  1. A consumer can request that a PoA be executed instead of individual services. The SOA infrastructure can execute this PoA, possibly using an ESB, and the specification will help the ESB infrastructure make the optimizations necessary to provide the required non-functional capabilities.

  2. If enough consumers ask for a PoA, a service provider may decide to offer it as a composite service, thus helping providers evolve the services their consumers demand the most.

Tuesday, August 08, 2006

SOA - more Q&A

As far as SOA goes, the predicament of service granularity is going to haunt everybody. What level of services? Should we have elementary services, which are then composed into composite services usable by service consumers? Or should we not worry about granularity, just define the services from the consumer's perspective and be done with it? And what about the evolution of services? What about multiple perspectives? Questions galore, no answers...
IBM fellows do talk about this, and in general about the maturity of SOA. They have put out some material which touches upon these areas. I hope they have some answers.
Interestingly, IBM's SOMA (which appears to be a hybrid approach borrowing from MDA) looks the most promising. But as IBM says, the whole area is still immature. For now, it might suffice to say that SOA is a journey, and it may take a long time to finish it. Have patience; this is the right path, but a slightly longer one.

Friday, July 21, 2006

Walled web 2.0

The more I think about Web 2.0 and SOA, the more concerned I get, what with loosely coupled services, available anywhere and anytime, replacing applications. Is it all desirable?
Well, I guess when the hype dies down, CIOs will realise that the necessity and usefulness of these ideas are bounded within some context. One does not really want to make services available to all and sundry and create unpredictable demands on one's infrastructure, not to mention the security and privacy nightmares.

As with any networked system of systems, one must draw boundaries, and define authorisation, ownership and access rights within them. There really cannot be anywhere, anytime services. The services are walled; Web 2.0 is walled. Or else it is well-nigh unusable.

Friday, July 07, 2006

What does Web 2.0 mean for enterprise architects

A very concise definition of Web 2.0 is that it treats the web as a platform and lets the user control the data. There are services instead of applications, and the user composes applications from these services per his need, using the web as the platform. Participation from various sources to achieve a result collaboratively is another core theme of Web 2.0, combined with better usability and a richer experience.

What does Web 2.0 really mean to an enterprise? Is it for real? What benefits accrue to enterprises because of Web 2.0? What are the pitfalls?

There are any number of instances in an enterprise when you hear users complaining about the lack of availability of data. Sometimes it is available at the wrong granularity, sometimes it is not as current as required, and sometimes it is not available when required. Surely those users will be elated at the definition of Web 2.0, where authorisation and authentication are the only things between the user and the data. However, the granularity guarantee is a tricky bit. If suppliers don't get it right, the data will be useless for users, and consequently there will be differential service proliferation. If it is too granular, performance penalties are to be borne by users. So it is not all that rosy, huh?

In the Web 2.0 world, a supplier cannot control the usage of its services. Once a service is out there in the open, one cannot easily change it without affecting known and unknown users. So suppliers have limited chances to get service definitions right; otherwise there will be horizontal service proliferation.

The AJAX promise of a desktop-quality user experience in Web 2.0 has to be seen to be believed. But it does open a can of worms on the security front, which needs to be addressed.

Collaboration between stakeholders in an enterprise is sought after, but it needs to be bounded by authorisation and authentication. For example, disintermediation between stakeholders is sometimes desired, but at other times it may threaten the business itself and hence be discouraged. What kind of collaboration is allowed and what is disallowed is tricky to define, much less enforce. Data privacy issues are not to be taken lightly and need serious thought.

So an enterprise architect needs to be aware of these broad issues before plunging headlong into Web 2.0.

Friday, February 03, 2006

Back Again!!!

Well, I am back again after a long hiatus (Did I spell that right?).
SOA does not seem to have progressed much.
I still see old issues, and nobody seems to be addressing them.
The focus seems to be on getting the SOA infrastructure in place; worry about the actual services later.

Well, I feel the whole thing is going to unravel like every previous silver bullet.
All tall claims and no returns. Business folks will be after IT people's heads once this latest craze dies down. God bless those CIOs taken in by the SOA hype...