Sunday, December 23, 2007

SOA Fable: Thermometerman

This story is courtesy of James Gardner. It points out how a shared service provider sticks to service level agreements (SLAs) rather than to the service consumer's satisfaction with the service. An SLA typically defines objective criteria for measuring some attribute of a service, and the attributes covered by an SLA do not necessarily imply consumer satisfaction. Providing personalised consumer satisfaction to a large consumer base is considered uneconomic. So the service provider settles on objective criteria acceptable to a large group of consumers, attempting a golden mean between consumer satisfaction and the provider's economics.

The reason for doing so is economic. Service providers segment their consumers, and for each segment the provider decides which service attributes that segment is sensitive to. The provider then optimises those attributes alone for that segment.

In doing this segmentation, however, service providers may lump together a diverse group of consumers with diverse aspirations, motivations, intentions and temperaments. The service attributes deemed important for the entire group may be the sum total of the attributes important to these smaller groups, but each smaller group may want other attributes addressed which the segmentation misses. So instead of achieving a golden mean between economy and satisfaction, it may lead to widespread consumer dissatisfaction.

The alternative is to let consumers choose which service attributes matter to them: in essence, allow service consumers to define their own SLA, even on the fly. The availability of such mass-customisation ability is the moral of this story.

In an SOA, the service provider can allow the consumer to choose which attribute of the service the consumer wants optimised, and even allow this optimisation to take place in every interaction if that makes sense. Some combinations of attributes are difficult or uneconomical to achieve.

For example, service availability and service currency cannot both be 100% at the same time. When you provide 100% availability, you may have to sacrifice currency a little, and vice versa.

Allowing such mass customisation might mean more work for the service provider. The provider may have to provide different implementations to satisfy different combinations of service attributes, and may charge differently for each combination to take care of the economics. Whether this is done offline via service contracts or online via an architectural layer doing service-attribute brokering should be left to the provider.
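As an illustration, such an attribute-brokering layer might look like the following sketch. The attribute names and backend names here are hypothetical, invented purely to show the routing idea: the broker maps a consumer-chosen combination of SLA attributes to the implementation that optimises it, and refuses combinations the provider does not offer.

```python
# Hypothetical sketch: route each request to the implementation that
# optimises the attributes this consumer cares about.

IMPLEMENTATIONS = {
    # frozenset of optimised attributes -> implementation (hypothetical names)
    frozenset({"availability"}): "replicated_cache_backend",
    frozenset({"currency"}): "direct_master_backend",
    frozenset({"availability", "currency"}): None,  # uneconomical combination
}

def broker(requested_attributes):
    """Pick an implementation for the consumer-chosen SLA attributes."""
    impl = IMPLEMENTATIONS.get(frozenset(requested_attributes))
    if impl is None:
        raise ValueError(
            "no implementation offered for %s" % sorted(requested_attributes))
    return impl

print(broker({"availability"}))  # replicated_cache_backend
print(broker({"currency"}))      # direct_master_backend
```

Pricing per combination would simply be another column in the same table; the point is that the broker, not the consumer, carries the knowledge of which combinations are economical.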

Sunday, December 02, 2007

Model driven development

Todd has asked a few questions about using models during development of software. Though his questions are from a Business Process Model perspective, they apply generally to model driven development as a whole.

Since I have considerable experience in this area, I would like to comment.

In my opinion, modeling does not negate the need for continuous integration or testing. Unless one can prove models correct with respect to requirements using theorem provers or similar technologies, testing is a must. (Writing those verifiable requirements would take you ages, though.) And one does need to define an appropriate unit, in the model driven development world, for large enterprise-class developments, to allow for parallel development. Continuous integration is one of the best practices one would not want to lose when multiple units are involved.

We had defined a full-fledged model driven development methodology, with an elaborationist strategy, for developing enterprise-class components. We modeled the data and behaviour of a component as an object model, which was then elaborated in terms of business logic and rules before being converted into deployable artifacts. We did it this way because business logic and rules were considered too procedural to be abstracted in any usable modeling notation, but that has no bearing on the discussion that follows. The methodology allowed for continuous integration during all phases of development. We had defined the component as the unit for build and test. These units could be version controlled and tested as units. Since it was a complete software development methodology, the same models were refined from early analysis to late deployment. The component as a unit, however, made sense only during the build and test phases. For requirements analysis and high-level design, different kinds of units were required, because during analysis and design different roles access these artifacts, and their needs differ from those of the people who build and test.

Lesson 1: Units may differ across phases of the life cycle. This problem is unique to model driven techniques, because in the non-model-driven world there is no single unit which goes across all phases of the life cycle. If you are using iterative methods, this problem becomes even trickier to handle.

We found that models have a greater need for completeness than source code, and cyclical dependencies cause problems. That is, the equivalent of a 'forward declaration' is very difficult to define in the model world, unless you are open to breaking the meta-models. For example, a class cannot have an attribute without its data type being defined, and that data type may itself be a class which depends on the first class being ready. I am sure a similar situation arises in business process modeling too. This had a great implication for continuous integration, because these dependencies across units would lock everything into a synchronous step. That is good from a quality perspective but not very pragmatic. We had to devise something similar to a 'forward declaration' for models. I think I can generalise this and say it will apply to all model driven development that follows continuous integration.
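A minimal sketch of what such a 'forward declaration' for models might look like follows; the class and registry names are hypothetical, not taken from any real modeling tool. The idea is that a stub stands in for a class owned by another unit, so the two units can be built in parallel and the reference is resolved at integration time.

```python
# Hypothetical sketch: a 'forward declaration' for models. A stub type
# stands in for a class whose definition lives in another model unit.

class ModelStub:
    """Placeholder for a class defined in another model unit."""
    def __init__(self, qualified_name):
        self.qualified_name = qualified_name

class ModelClass:
    def __init__(self, name):
        self.name = name
        self.attributes = {}  # attribute name -> ModelClass or ModelStub

    def add_attribute(self, attr_name, attr_type):
        self.attributes[attr_name] = attr_type

def resolve(model_class, registry):
    """At integration time, replace stubs with the real definitions."""
    for attr, typ in model_class.attributes.items():
        if isinstance(typ, ModelStub):
            model_class.attributes[attr] = registry[typ.qualified_name]

# Unit A declares Order; its 'customer' attribute refers to a class
# owned by unit B, via a stub rather than the real definition.
order = ModelClass("Order")
order.add_attribute("customer", ModelStub("crm.Customer"))

customer = ModelClass("Customer")        # built independently, in unit B
resolve(order, {"crm.Customer": customer})
assert order.attributes["customer"] is customer
```

The trade-off is exactly the one noted above: stubs restore parallelism, but integration must now verify that every stub resolves, which is extra machinery the meta-model did not originally have.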

We had our own configuration management repository for models, but one could use a standard source control repository, provided the tool vendor allows you to store modeling artifacts in a plain-text format. (Some source control tools tolerate binary files as well, but then you cannot do 'diff' and 'merge'.) Devising a proper granularity is tricky, and the point above should be kept in mind. Some tools interoperate well with each other and provide a nice experience (e.g. the Rational family of tools); then your configuration management tools can help you do a meaningful 'diff' and 'merge' on models too.

Lesson 2: An appropriate configuration control tool is needed even in model driven development.

The need for regression testing was higher because of the point above. Every change would ripple to every other part connected with it, marking it as changed. Traditional methods would then blindly mark all those artifacts for regression testing. Again, good from a quality perspective, though not very pragmatic. We had to make some changes to the change management and testing strategy to make it optimal.

Lesson 3: Units need to be defined carefully to handle the trade-off between parallelism and testing effort during the build phase.

In short, model driven methods tend to replicate the software development methodology that is used without models. Models provide a way to focus on key abstractions and not get distracted by all the 'noise' (for want of a better word) that goes with working software. That 'noise' itself can be modeled and injected into your models as cross-cutting concerns. In fact, based on my experience with this heavy-weight model driven approach, I came up with a lighter approach called 'Code is the model', which can even be generalised to 'Text specification is the model', removing the code v/s model dichotomy as far as software development methodology goes.

Nowadays some modeling tools have their own runtime platforms, so models execute directly on the platform. This avoids a build step. But defining a usable and practical configurable unit is still a must, and defining a versioning policy for this unit and a unit & regression testing strategy cannot be avoided. When multiple such modeling tools, each with its own runtime platform, are used together, that brings its own set of challenges in defining testable and configurable units. But that's a topic for another discussion!

Friday, November 30, 2007

SOA Fable: Do you know the intent?

Before the advent of SOA, enterprises opened up their central systems to a limited set of users through initiatives called 'extranets'. The idea was that stakeholders could have limited visibility into the workings of the enterprise, which would help them in their businesses, which in turn would help the enterprise.

So this giant automobile company opened up its inventory to its distributors, and distributors could place orders for spare parts based on the inventory they saw. Soon the company realised that some distributors were misusing the facility to hoard critical spare parts when inventory was running low, making an extra buck by selling those parts to other distributors at a premium.

The company stopped this facility. To the company's dismay, some clever distributor found a way around it. He started placing orders for a ridiculously large number of parts; the system returned an error response with a suggestion of the maximum number he could order, the maximum being the level in inventory. This was worse than the earlier situation: not only was the distributor getting the information he wanted, he was also loading the central system unnecessarily, by continuously running this service for different parts and forcing error responses.

Then, because of some unrelated change, the company stopped giving the correct inventory level in the error response. So this distributor started placing orders beginning with a high number and gradually coming down to the correct level, in effect doing a binary search. This was worse still: it created a lot of orders in the system, which launched the associated work-flows, and then cancelled them, which launched even more work-flows.
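The distributor's trick is plain binary search: it recovers the exact inventory level in a logarithmic number of service calls, using only accept/reject responses. A minimal sketch, with `place_order` as a hypothetical stand-in for the real ordering service:

```python
# Sketch of the distributor's probe: bisect on order size, using only
# whether the order service accepts or rejects each (later cancelled) order.

def place_order(part, quantity, inventory):
    """Stand-in for the ordering service: accept iff stock suffices."""
    return quantity <= inventory[part]

def probe_inventory(part, inventory, upper_bound=1_000_000):
    """Find the exact stock level by bisection on order size."""
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if place_order(part, mid, inventory):  # accepted: stock >= mid
            lo = mid
        else:                                  # rejected: stock < mid
            hi = mid - 1
    return lo

stock = {"brake-pad": 137}
print(probe_inventory("brake-pad", stock))  # 137, in about 20 service calls
```

About twenty probes suffice for a million-unit upper bound, but each probe creates and cancels a real order, which is exactly why this variant hurt the central systems far more than the earlier ones.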

Until the last situation arose, the problem was handled as an IT capacity problem and the misdeeds of the distributors were not caught. The last hack tried by the distributor nearly broke the back of the central systems, which launched a massive investigation that uncovered these malpractices.

What has this got to do with SOA?
Well, one vision for SOA is that it can make enabling services available to consumers. Once consumers have enabling services available, they can compose the applications they need. Even if we set aside doubts about how one defines these enabling services, consumer intent is a big issue. The intent need not always be mala fide. Some consumer running an innocuous service multiple times, to get the data set he wants in order to do the analysis he wants, can break the back of transactional systems. This has nothing to do with service policy: the consumer is willing to abide by the policies set for the service, but his intent is not the one envisaged by the service provider. And in this era of 2.0, this is bound to happen. SOA needs to take cognizance of this issue and handle it. No, CAPTCHAs are not a solution when services are meant for consumption by machines rather than humans.

The moral of this story is that, in a true SOA, the architecture must

1. have a mechanism to diagnose violations of normal intent by consumers
2. provide alternate service implementations for genuine exceptional intent
3. provide a seamless switchover from normal intent to exceptional intent
4. provide incentives to promote normal intent and disincentives to reduce exceptional intent, when it makes sense to do so.

Thursday, November 15, 2007

People or process?

There is an interesting debate going on, that of people versus process in the enterprise architecture function. Now that we are on the subject, let me bring in my perspective.

Legend has it that there were weavers in the Dhaka region of Bangladesh who could weave cotton cloth so fine that a nine-yard piece would weigh less than 50 grams. That art is lost, because those weavers never codified the process of weaving; the knowledge passed between generations only by word of mouth. When pitted against cheap cloth from Manchester, their better-quality cloth lost market share to the cheaper but not-so-great cloth. Eventually the whole craft died. It need not have been so, had they codified the process. At least the craft would have survived and could have been revived in these days of green trade/fair trade.

Moral of the story,
1. Not codifying a process is no guarantee against obsolescence.
2. Someone who can employ a repeatable process wins in a mass market over someone without a process. (And enterprise IT is by no means a niche market.)

Having said that, I do believe there are job functions which cannot be fully codified into processes. I believe the enterprise architect's is one such job function. But the job can still be divided into pieces that are well defined and pieces which need a bit of experience and expertise. The defined parts can then be codified and handled with slightly lower skill levels. With this approach a full-fledged enterprise architect, with some help, can do the job of many enterprise architects.

Nobody would have been interested in this model if there were a supply of good enterprise architects. The problem is there are not enough good people to go around. This is a way to make do with what you have got. And yes, it does work in the enterprise architecture function too.

Saturday, November 03, 2007

EA and structuring IT change portfolio

There is no steady state for the EA function. EA has the unenviable job akin to converting a propeller-driven aircraft into a jet aircraft, while flying. And it does not stop there: by the time EA has managed the turn into a jet aircraft, the new scramjet is already available and waiting to be adopted.

To cope with this constant churn, the usage of EA frameworks is widespread. But, as I had pointed out in my earlier post, results on the ground are the only thing appreciated by end-users. This post by Tom re-emphasised the point about operationalising the enterprise architecture, as had this post by Sam. Operationalisation of EA is what produces the results, no matter what framework was used to structure thinking and what artefacts were built. Unfortunately, after spending a lot of bandwidth on think-and-build activities, there is little spare bandwidth available for operationalising the EA.

In a large enterprise, structuring the business IT change portfolio is a great opportunity for the EA function to make sure EA is operationalised. It is where the lack of bandwidth hurts the EA function. Somehow this important activity uses arcane practices similar to the waterfall model followed in SDLC. It is no better than business folks placing their bets on initiatives, based on the gut feel of a few individuals.

It need not be so. If you look at the large-scale picture of enterprise IT, it is a fractal. So what works for structuring changes to a single application can work for enterprise IT too. For an application change, one carries out impact analysis, based on functional and non-functional requirements, to figure out the changes required. A similar approach can work for enterprise IT, with the functional requirements coming from the business and the non-functional requirements coming from the EA group. There are two critical differences, though. Firstly, requirements for an application are far more crystallised than business requirements meant for enterprise IT. Secondly, the understanding of a single application is much more rigorous than that of the entire enterprise IT landscape. Any approach used to structure the IT change portfolio must bear these facts in mind and evolve a strategy to handle them. Any investment made in this area will go a long way in establishing EA credibility within the organisation.

Wednesday, October 03, 2007

Conform or co-exist?

This post suggesting a need for extreme standardisation deserves discussion within the industry. The idea of extreme standardisation is extremely appealing but highly impractical. Pulls and counter-pulls within the industry will make sure that there are always competing standards.

Vendors don't agree on who decides on standardisation; there are multiple competing bodies even in the standards-setting space. Who decides what the categories for standards are? Should everything be standardised? That would imply commoditisation, so what would vendors bring in as differentiation? There are no easy answers. If we ignore these questions, we will end up with a surfeit of standards on top of the existing mess.

As they said in the Vedic period, "pinde pinde matirbhinna". Translated into English it means: every soul thinks differently. Implied in that is the notion that no thinking is superior or inferior, so who is to say which one should become the standard? Each specification mechanism or execution methodology is mostly a response to something desired by a user community. So in some sense they represent users' aspirations (and vendors' motivations). How can these different motivations and aspirations be reconciled?

So what is the way out? Well there are ways of tackling these tricky questions.

For example, instead of relying on extreme standardisation, industry bodies can work on interoperability standards. To do that, as an industry we must first agree on the means to define categorisations and 'domains of discourse'. We can then go on to develop interoperability standards for these various domains (intra-domain as well as inter-domain). Within IT there are various domains of discourse; for example, in the IT change management category the domains could be business strategy, IT strategy, portfolio planning, requirements & scoping, design & build, verification & validation, roll-out & operational management, systems retirement, etc. For a different categorisation there will be different domains of discourse. Each of these domains will have many competing specification mechanisms and execution methodologies. What we need are the means to interoperate between these specifications and methodologies, within a domain of discourse and across domains of discourse.

With such interoperability standards in place, a deliverable from one specification mechanism can be used to drive deliverables which use another specification mechanism. Similarly, activities from one methodology can trigger activities of another methodology. If we achieve this as an industry, it will be a great leap forward. (MOF, the Meta Object Facility from the OMG, is a close-enough example of an interoperability standard.)

Or, if we are lucky, a dominant de facto standard may evolve in every domain of discourse. And if it is far superior to any competing one, as happened in the case of RDBMS and SQL, it can then become a de jure standard.

So, however appealing the idea of compliance to well-defined standards, I am afraid we have to learn to manage a myriad of different standards and help them co-exist.

Wednesday, August 22, 2007

SOA Fable: Do you wear a watch?

Most of us wear watches, and except for those who wear one as a piece of jewellery, we expect a service from this piece of equipment. And what is the unique service provided by this equipment? It lets us know what time it is! Wait, hang on. Aren't there enough service providers around us providing this very service? Every car has a clock in it, every house has many clocks, every PC has a clock. Even every mobile phone, PDA and iPhone has a clock. Why do people wear watches, anyway?

What are your expectations, as a consumer, from a possible service telling you what time it is?

Availability: Service must be available when consumer needs it.
If a consumer relies only on a mobile phone and the battery runs out, the service is not available when the consumer needs it.

Reliability: Service response must be reliable and should not require double checking.
A consumer is not sure of the time shown by a clock in a public place; he needs to double-check it with another reliable source anyway.

Accessibility: Consumer can access the service whenever he needs it.
A clock in a laptop in a backpack is not easily accessible in a crowded commuter train.

Trust: The consumer trusts that the service is backed by accountability on the part of the service provider. When it is a question of life and death, one can trust the service provided by one's own piece of equipment to be more accurate, reliable and available than a general-purpose service. As a consumer, one tends to trust no one but oneself as the most accountable service provider.

This is a very important insight, and it helps one prepare a proper versioning policy.
Without that trust, versioning schemes can be misused for creating specialised services.

In enterprises, since SOA is mandated, project owners will use services. But they will make sure that they get their own private version of a service. This is quite easy to do by getting veto power over the life cycle of a service and misusing governance to that end. Assume service version 1 is in use. A second consumer wants to create another version, because it has additional needs; this version 2 is derived from version 1. Now the first consumer wants an upgrade to his existing service, but he is not willing to accept version 2 as the base for his next version. He will find any excuse to make sure he gets a version 1.1 rather than a version 3. This pattern keeps repeating, and soon there are a lot of versions changing only in the second qualifier. You get what I call the parversion (parallel version) anti-pattern.
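One way a governance function might spot the parversion anti-pattern mechanically is to watch for major versions accumulating unusually many minor revisions while later majors sit unused. A hypothetical sketch (the threshold and the data shape are invented for illustration):

```python
# Hypothetical sketch: flag the 'parversion' anti-pattern in a service's
# version history. A version is a (major, minor) pair; the anti-pattern
# shows up as several parallel minor lines instead of one major line.

from collections import Counter

def parversion_suspects(versions, threshold=3):
    """Return majors that have accumulated suspiciously many minor versions."""
    minors_per_major = Counter(major for major, _minor in versions)
    return sorted(m for m, count in minors_per_major.items()
                  if count >= threshold)

# Version 1 forked into 1.1, 1.2, 1.3 while version 2 sits unused:
history = [(1, 0), (2, 0), (1, 1), (1, 2), (1, 3)]
print(parversion_suspects(history))  # [1]
```

Such a check only surfaces the symptom; as noted below, governance still has to chase the root cause behind each flagged major.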


SOA governance must guard against this anti-pattern and create appropriate policies and controls to minimise and eliminate its occurrences. Moreover, governance must recognise that this is a symptom and that the root cause lies elsewhere.

Moral of the story: the consumer is willing to bend rules to get an available, reliable, accessible and trustworthy service. Watch out for such rule violations and fix the root cause.

Sunday, July 15, 2007

Blogged for a year

It has been a year since I started to blog regularly (well, at least once a month). The blog has given me a platform to express my ideas. I expected a lot of interaction with fellow professionals; that expectation has not been met, mainly because of my inability to continue conversations. Carrying on conversations over blogs needs a lot of commitment, and balancing work, personal commitments and finding time for this activity has been a challenge for me. For that reason I am in awe of bloggers who blog daily and on a variety of subjects.

But whatever limited interactions I have had have opened my eyes to new ways of looking at things. I am ever thankful to the blogosphere for enriching my professional life. I hope to meet some of my online acquaintances, whose thoughts I found insightful, in person. I also hope someone, somewhere might have found my thoughts useful and feels the same about me.

Sunday, June 10, 2007

Not whether but when to REST

There is a debate starting to rage about whether REST or SOAP/XML-RPC is the better choice for services. Following is my take on the REST v/s SOAP/XML-RPC debate, in traditional enterprise computing scenarios.

From whatever I have read till now, my opinion is that REST is closer to a distributed object oriented abstraction than to a service oriented abstraction. The following table tries to bring out the similarities between REST and the OO abstraction.

REST | OO
In REST there are resources. | In OO you have classes and objects.
In REST, HTTP methods (PUT, DELETE, POST, GET) provide lifecycle management. | In OO, the language provides these facilities (constructors, destructors, method invocation, accessor methods).
In REST, resources keep application state information. | In OO, objects represent the state.
In REST, type safety is provided by XML schema. | In OO, type safety is provided by pre-shared class definitions (e.g. .h files or Java .class files).
In REST, dynamic invocation is possible because of the repository. | In OO, dynamic invocation is possible because of reflection.


Of course, REST provides a more open distributed object oriented mechanism than, say, CORBA or EJB. It does so by using XML schema for marshalling/unmarshalling and an open protocol like HTTP (as against DCOM, IIOP or RMI).

But it is bound to face some of the problems that distributed object oriented mechanisms faced, e.g. granularity of objects, scalability-related issues, and differing consumer experience based on lazy or early instantiation of resource graphs (object graphs in OO).

REST is an interesting way of implementing a distributed object oriented mechanism, and there are times this abstraction is better suited than a pure service oriented abstraction. So in my opinion the debate should not be either REST or SOAP/XML-RPC, but when to use REST and when to use SOAP/XML-RPC. The limiting factor for the time being is the availability of tooling and skills. Over a period of time these will develop, and then both can co-exist within enterprises.
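The REST-as-distributed-OO analogy in the table above can be made concrete with a toy sketch. The `ResourceStore` class is hypothetical, standing in for a REST server's resource space, with each HTTP method mapped to its object-lifecycle counterpart:

```python
# Hypothetical sketch of the REST-as-distributed-OO analogy: each HTTP
# method on a resource corresponds to an object lifecycle operation.

class ResourceStore:
    """In-memory stand-in for a REST server's resource space."""
    def __init__(self):
        self._resources = {}  # URI -> representation (the object's state)

    def put(self, uri, representation):   # ~ constructor / mutator
        self._resources[uri] = representation

    def get(self, uri):                   # ~ accessor method
        return self._resources[uri]

    def delete(self, uri):                # ~ destructor
        del self._resources[uri]

store = ResourceStore()
store.put("/orders/42", {"status": "placed"})  # create the 'object'
print(store.get("/orders/42"))                 # read its state
store.delete("/orders/42")                     # end its lifecycle
```

The URI plays the role of the object reference; the granularity and instantiation questions raised above are exactly the questions one would ask of the object graph behind these URIs.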

Governance

Well, I must share this experience of how an effective governance control can reduce to an inefficient bureaucratic control. This happened at a small airport, where I had gone to attend a conference.

While boarding the plane we were called in by row numbers. The only hitch was that, the plane being small, we were taken to it in a bus. So we were boarding the bus, not the plane, in the sequence of our row numbers. Once in the bus, people sat wherever they wanted to, and when they got out of the bus to board the plane, they were in random order. This defeated the whole purpose of boarding by row number.

Boarding a plane in order of row numbers is very efficient when passengers board the plane directly. But in the case of boarding by bus, the procedure either needs to be changed to have the desired effect, or abandoned.

I think there is an important lesson in this for people who design governance controls for systems.

Monday, June 04, 2007

SOA - Necessary and sufficient

SOA is heralded as the 'must have' for business agility. I agree, up to a point. SOA is necessary but not sufficient to achieve the highest degree of business agility. Let me explain why I think so.

In a service oriented world, information systems try to be congruent with the business world, providing information services in support of business services. Business organisations provide business services in order to carry out business activities. These business services are steps within business activities, and they use information services provided by the underlying IT infrastructure.

However, the underlying IT infrastructure is not fully amenable to this business-service-oriented paradigm. At the implementation level, IT infrastructure has to deal with non-functional properties such as responsiveness, scale, availability, latency, cost, skills availability, etc. That imposes certain restrictions on implementations. E.g., for reasons of scale we normally separate behaviour and data: behaviour (as represented in business logic and business rules) scales differently than data (and data stores: databases, file systems). That's why, in a typical distributed information system, there are more database servers than servers dedicated to executing business logic.

In a service oriented world, the information services provided by information systems need to mask such implementation issues. The idea that SOA will provide business agility holds true if and only if the information services that enable business services use disparate information systems seamlessly. In the SOA world, business services should lend themselves to rapid reorganisation and redeployment, in terms of business activity volumes, business responsiveness, speed of new product/service introduction, etc.

The current thinking seems to be that a set of open standards enabling integration between disparate information systems is all that is needed. With such an integration mechanism, one can create a facade of a business service using the underlying disparate information systems. Hence the emphasis on XML schemas, in-situ transformations, service choreography and, to an extent, mediation [between the required business service and the provided information service(s)].

To me this is part of solution. It is the necessary condition but not sufficient.

As I have posted in the past, one really does not know what the granularity of information services should be. If you provide too granular information services, you will be better at reorganising but hard pressed to meet non-functional parameters. If you provide services good enough for current usage, satisfying non-functional parameters, you will have a tough time reorganising. So for all practical purposes, any business service change entails possible information service changes, rather than just a reorganisation of information services.

That would mean the agility of business service reorganisation comes down to change management in information systems. If you make pragmatic decisions in favour of speed of change, it leads to duplication and redundancy. If you try to keep your information systems pure and without redundancy, you sacrifice speed of change.

So the key appears to be


  • getting your information services granularity just right for all possible reorganisations the business would need. You cannot really know all possible business changes, but you can know them up to a certain time horizon, so that you are just reorganising your information services rather than redeveloping them.
  • if this is not possible or considered risky, taking a refactoring-oriented approach and incrementally building the service definitions.
  • and whenever you change information systems (because, despite your best efforts, business came up with a change that is not possible with the current information service definitions), using MDA or software factories (or any other conceptual-to-implementation mapping technology) to effect the change from conceptual business services onto their IT implementation. This would bring down the time to make changes, and would also enable you to make pragmatic decisions, because even if there are duplications and redundancies at the implementation level, the conceptual field stays clean and pure.

That would be complete SOA for me.

Wednesday, May 30, 2007

Service Aversion to Service Orientation

Well, I have a slightly different take on Service Averse Architecture. It is based on my experience with the Banking, Financial Services and Insurance (BFSI) industry and may not generalise to other industry segments. Information technology (IT) was introduced in BFSI to improve operational efficiencies. If you look at the value chain within BFSI, viz. manufacture-market-sell-service-retire a product or a service, IT was primarily required to take care of the 'service' part. As long as IT expenditure was less than the operational efficiencies it provided, enterprises were happy, notwithstanding delays and budget overruns. Since IT was not commoditised then, whoever could cross the barrier to entry benefited from IT (despite its cost and time overruns).

Interestingly, enterprises within BFSI were always 'service oriented' in their business. They did provide specific services to their stakeholders. The problem was always with the information systems they used to support these services. There was a big misalignment between the services the business provided and the information systems used to provide them. These info systems were always monolithic and closed, and it was they that distorted the underlying service culture of the business. And these ill-fitting information systems were the result of what Todd would call 'project culture'.

The interesting point is how a business, which itself operated services for its stakeholders, was taken over by this project culture and created a 'service averse architecture' in its information systems. It was mainly due to the aura and geeky culture associated with IT. Business leaders did not understand IT but understood its importance, so they gave free rein to IT leaders. The initial IT leaders did not have much understanding of the underlying businesses, so they were in the mode of "Tell us exactly what you want done, and we will do it!" Unfortunately, what business wanted done was always a small piece of a big puzzle. Hence multiple monolithic, closed information systems were developed, each handling parts of the services the business was delivering to its stakeholders.

Now that IT is commoditized and the barrier to entry has been lowered considerably (buying a mainframe used to be a momentous decision for a CEO; now IT decisions are hardly ever made by CEOs), cost and time have become important. IT has also penetrated the other parts of the value chain, notably market, sell and even manufacture (which uses business intelligence tools). So IT has become more important to business, while at the same time business has become less tolerant of IT’s pitfalls.

Also, over the years IT folks started understanding the business in more detail and began asking “Why do you want it done this way?” rather than just following orders. It is what my friend Ravi would call a shift from an output-oriented to an outcome-oriented mindset. So when business and IT finally started coming closer, they began appreciating the need for alignment between the two. SOA, in my opinion, is a vehicle for that: it helps IT recast itself in business terms.

Most organisations out there have a ‘Service Averse Architecture’ within their information systems. The organisations making the transition to SOA are the ones where IT leaders have made that paradigm shift from an output- to an outcome-oriented mindset. These are the leaders who understand the importance of business and IT alignment and how SOA can help achieve it.

Unfortunately, leaders buying into the SOA vision is just part of the story. It means the enterprise is willing to make the transition to SOA; whether the transition succeeds depends on changing the entire organisational culture from undue competition to trust and co-operation.

Wednesday, May 02, 2007

Commitment v/s involvement

A recent conversation with one of the project managers tasked with delivering services for the enterprise (SDT: services delivery team) reminded me of the old adage about commitment and involvement. The project manager said he was delivering to the agreed project requirements and had put change control in place to handle the churn in requirements. So he believed the SDT was totally involved with the project, for the success of both the project and the SOA initiative. The old adage goes: "In a breakfast of ham and eggs, the chicken is involved but the pig is committed." What we need from the SDT is not involvement but commitment (of the kind displayed by the pig).

The requirements for a project invariably change. The services delivery team, being a subcontractor to the project team, is going to be the last team to know about the changed requirements and the first one expected to deliver its part of them. So the SDT is going to be the fall guy for all the problems the project team faces, because, placed in such a precarious position, the SDT will not be able to deliver.

So what we need from the SDT is commitment to the project rather than involvement with it. Knowing and anticipating changes to requirements, from a services perspective, will be a key challenge. The major reasons for requirement changes I have seen in the past are:

  • Wrong people giving requirements
  • Strategy not getting properly articulated down the line, leading to wrong requirements

The first part is addressable by the project team by involving all stakeholders appropriately. It's the latter part which results in major problems. Most business projects are the result of some strategic decision from the executive. The way strategy gets articulated from the top executive down to the people executing projects is nothing better than 'Chinese whispers'. Every link in the chain adds its own spin and overloads elements of the strategy with its own agenda. The SDT, being an enterprise-wide initiative, should be able to rise above these spins. So the SDT needs to be committed to project success by being on a par with projects, rather than being happy with subcontractor status. That's the only way an SDT approach will work in a large enterprise. The SDT should be treated as one of the strategic projects undertaken by the business, rather than a mere service provided by IT for the rest of the projects.
This will guarantee the SDT's commitment (of the pig variety).

Tuesday, April 24, 2007

AGILE Outsourced

AGILE is a scientific space mission devoted to gamma-ray astrophysics, supported by the Italian Space Agency (ASI) with the scientific and programmatic co-participation of the Italian National Institute for Astrophysics (INAF) and the Italian National Institute for Nuclear Physics (INFN).

AGILE was launched by the Indian PSLV rocket from the Sriharikota base (Chennai-Madras). The launch was made, as planned, on 23rd April 2007 at 12:00 a.m.
Every process was executed correctly.

Video is now available

Tuesday, April 10, 2007

Centralised service delivery team

Typically in the business IT world, the budgeting and portfolio definition phase determines what business benefits should be targeted in a given period and the maximum cost that can be paid to achieve them. Business projects are then executed to deliver those benefits while minimizing cost and time. So, notwithstanding the various benefits of building reusable services for the enterprise, projects tend to build point solutions, mainly because of these budget and time considerations.

In response to this situation, one of the best practices that has evolved in SOA implementations is to have a central team implementing services for business projects. This has been a good idea; it can address many of the governance challenges, mostly because this central services team (let's call it the services delivery team, SDT) is managed and governed by the principles set by enterprise architects for SOA, rather than by budget and cost considerations alone.
However, there are a couple of challenges in putting this into practice.

  • Having the right engagement model between the project team and the SDT is a big challenge. This engagement model makes or breaks the future of SOA in the enterprise, so it is vitally important to get it right.
  • Getting the composition of the SDT right is another challenge that must be addressed.

Initially the SDT was set up as an island which owned the services and their interfaces. The SDT utilised existing capabilities or, whenever necessary, got the capabilities built. The engagement between the project team and the SDT was mainly of an outsourcer-outsourcee type. The project team defined its requirements and threw them over to the SDT, which then worked out the interfaces (keeping reusability in mind) and further outsourced the building of any required capability to the teams maintaining the systems that might provide it. The SDT had only application architects and high-level designers, apart from project managers. So the role of the SDT had become that of systems integration designers, and standard SOA tools had become SI tools.

This mode of working had turned into a waterfall life-cycle model, partly because of the way the engagement worked. It started to add a lot of overhead, in time and effort, to project estimates, while the benefits were not immediately realised by projects. As a result, project teams started resisting this engagement model, which in turn was viewed as a rejection of SOA by project teams (which it was not). When project managers are measured by budget and schedule compliance alone, it is natural for them to resist an engagement model which takes away their control over how much budget and schedule risk they can afford to take. So SDTs too are facing problems.

I think a new co-sourced engagement model needs trying out. In this model, the services delivery team works as part of the project team, governed by the same budget and schedule considerations, but also by SOA principles. When these two sets of governance principles contradict, compromises have to be made. These compromises are inevitably made in favour of the project; only rarely, in the case of strategic services, are they made in favour of the services. These compromises, in their wake, leave what I call 'non-services'. Some of these non-services need to be refactored into services by a part of the services delivery team which gets its funding straight from the CIO. This team has limited funding, hence can only turn so many non-services into services, resulting in a backlog. Over a period of time, the size of the backlog would give a fair understanding of the costs involved in making tactical compromises against strategic imperatives. This model needs to run for some time for its efficacy to be known, and for it to work, a strong SOA governance framework and a strong testing facility need to be in place.

Any sharing of experience in this area is most welcome.

Tuesday, February 13, 2007

Camping sites not slums!

Thanks, Todd, for an elaborate response to my earlier post. In the context of the city planning analogy for SOA that we are discussing, we agree that

  1. Slums do get developed
  2. Slums are undesirable
  3. Slums need to be contained and eventually removed.
  4. SOA helps containment and removal of slums

The only point of disagreement is about effective governance. Todd argues that effective governance can make sure that slums don’t spring up in the first place; conversely, if slums are springing up, governance is ineffective.

My contention is that such a governance mechanism may be difficult to establish in some cases. Todd has rightly pointed out that different governance models need to exist for organizations with different focus (viz. squeezing value v/s growth). However, the organization I was working with was in both these modes simultaneously. It was an organization created out of many mergers and acquisitions in a very short time frame. It is common (well, this is my opinion and I have no data) for such organizations to use one part of the merged entity to grow another part, while squeezing value out of the first. Moreover, value creation (growth) is not always driven by the normal value chain (viz. design >> build >> sell >> service). In this particular case a clever business person had figured out a financial engineering plan to release some value (millions of dollars) outside the normal value chain, and IT support was required to hasten this plan.

In such cases it appears really difficult to have governance mechanisms reconciling both these situations (growth and squeezing value). What we had was a 'squeezing value' focused governance mechanism. So many IT assets were created (which were really growth focused) outside of the normal (squeezing value focused) governance. There was a danger that these assets would then be further utilized by normal (squeezing value focused) projects. So we had a slum, and the danger of an ecosystem developing with the slum as its center.

We could avert dependency on the slum by some governance scheme. But then there were debates about not utilizing the slum that was already there: projects not using it were seen as redeveloping those capabilities 'unnecessarily'. The slum's creation escaped governance controls because it was part of a 'growth' focused effort and evaded the 'squeezing value' focused governance. But later projects not using the slum were caught in a debate because they were under the 'squeezing value' focused governance.

The key question, then, is how to have a reconciled governance catering for both growth and squeezing value, which would enable transitioning assets from the 'growth' governance model to the 'squeezing value' governance model.

My belief is that the answer to this question will provide the effective governance suggested by Todd, and SOA will be part of that answer. Then, instead of developing slums, we'll develop temporary camping sites which provide some capabilities and are governed: transient capabilities waiting to be mainstreamed and made permanent, if required.

If it is not too late, I would love to hear Todd touch upon some of these issues in his upcoming webinar.

Monday, February 12, 2007

City planning and slum control

Since we are talking about enterprise architecture as analogous to city planning, I thought I would bring in my developing-world perspective on how slums develop in a city despite a nice city planning guide, to bring out the importance of having policies for dealing with slums as part of that guide.

In the developing world, we are quite used to slums springing up in a city despite a city planning guide. A slum is a microcosm of a city, but without any planning or vision. It is like a city within a city, with its own ad-hoc rules and regulations. It does provide some useful services to the rest of the city. No doubt it is a sign of broken governance, but it is not just that: slums survive and thrive because they are needed for the proper functioning of the city and they do provide some value to it. However, slums are not desirable, and city planners must contain and eventually get rid of them.

Similar situations arise in the enterprise IT world, despite a nicely laid out enterprise architecture and strong governance. This article discusses some such cases of deviation, but lays the blame squarely on the ineffectiveness of governance. While that may be true in some cases, sometimes circumstances force one to bypass guidelines and governance. So what we also need is a framework to rectify the situation post facto. In city planning terms, we need a proper slum control and removal policy.

To illustrate, let me give this example.

In a large enterprise, a proper enterprise architecture is defined, as intended by Annie Shum's paper mentioned in this article. There are guidelines and a governance mechanism, and based on these a multi-year IT plan is drawn up. One element of the plan is to build subject-area-specific data services. The build of these services is governed by principles laid out in the enterprise architecture: the build should be iterative, TCO should be low, the resultant services must be maintainable/flexible/performant, etc. It is also tied to business benefits, and the capability is sponsored by a business-facing project which plans to use it in a year’s time. Now this programme is midway when another business sponsor comes up with a proposal to build a slightly different set of services for the same subject area (where the attributes are very specific to the problem at hand, their granularity is not generic enough, and their currency is very specific to the problem). This new business sponsor has a solid business case: he is promising to add millions of dollars to the bottom line if this new set of services is made available within a short span of time. There is a very small window of opportunity from the business perspective, and it does not matter if the underlying capability does not follow any of the enterprise architecture guidelines/principles or governance processes, as long as that window is utilized. The matter reaches the highest level of decision making within the enterprise (e.g. the CEO), and you know who would win the argument. So this set of services is built; it does not fit the plan laid out by the enterprise architecture, it may not use the technologies suggested by the enterprise architecture (procuring them in time for this solution is an issue), and consequently it may not use the solution architecture laid out by the enterprise architecture guidelines.

In short, this is a slum getting developed in a nice city (viz. enterprise IT) governed by a city planning guide (viz. enterprise architecture). And as an enterprise architect, if you protest too much, I am sure you would be shown the door. You cannot argue with millions of dollars added to the bottom line. So what should an enterprise architect do?

1. Do nothing. This would mean more such slums spring up, and then it is just a question of time before governance, and consequently enterprise architecture, breaks down totally. So it does not appear a viable option.

2. Demolish the slums. This is possible if what was built was a one-off capability, not required on an ongoing basis. As soon as the utility of the capability diminishes below a certain threshold, get rid of it completely; if required, by being very stern.

3. Rehabilitate the slums. This is the only option if what was built is not a one-off capability but is required by the enterprise on an ongoing basis. The reason it bypassed enterprise architecture guidelines and governance was to catch a business-benefit-specific window of opportunity, and one cannot avoid such incidents. What we now need is to refactor this capability as per the enterprise architecture guidelines and governance processes, bringing it into the mainstream of enterprise IT. In short, we must rehabilitate the slums (if required, by some amount of redevelopment).

There may be hybrid options based on specific situations, but an enterprise architecture plan, as a city planning guide, must provide for this eventuality as well. I have seen this type of issue crop up more than once, so it is difficult to dismiss it as a statistical outlier not worthy of a strategic response.

Sunday, January 21, 2007

CBD, SOA, whats next?

Most engineering disciplines have managed to reduce their problems to ones of component assembly. Individual components are well defined and engineered to satisfy a very concise set of requirements. The system-level problem then reduces to selecting proper components and assembling them to satisfy the overall system's requirements.

Doing things this way has its benefits, as is well established by other branches of engineering. Software engineering is trying to do the same, but without much success, at least in the business IT world. We have tried component-based development (CBD) and are now trying SOA, but we are not any closer to the panacea of component assembly.

What is the constraint preventing software engineering from moving to the next level? Implicit in this question is an assumption that software engineering wants to move to a component-assembly world. Well, I am not sure that is true either, and I am not alone in this thinking; see this post.

So the problem is not entirely technical. It is also one of business motivation and viable business models. In other branches of engineering, craft became engineering when it promised to bring prosperity to all stakeholders. In software engineering, what is the business model which will let multitudes of component makers flourish while the big guns concentrate on component assembly? Is it a standardisation effort that is lacking? Is the ultimate IT end user really interested in such a component-assembly world? Because in the business IT world, enterprise IT is the main differentiator for an enterprise, which prevents it from being commoditized as a service or product provider.

These are the hard questions that the industry must grapple with. Otherwise, every couple of years we will have a wave, be it CBD or SOA, and as an industry we will not move much.

Each such wave (CBD, SOA) takes us further than today. But to make the transition to the next level we need an industry-wide push, not vendor-specific attempts. And this must include all stakeholders: business IT consumers, service providers and product vendors. Perhaps the OMG can take the lead?

Thursday, January 11, 2007

SOA: Reuse, of Interface or Capability?

The SOA reusability debate is a bit confused. For more clarity, we must understand the difference between a service interface and the capability behind it. A service exposes an interface which is backed by a capability. From a service consumer's perspective, reusability of the interface is enough; from a builder's perspective, reusability of the capability is more important. It is the latter which is difficult (or impossible in some cases). It is possible for the same interface to be mapped to multiple capabilities, for example based on the Quality of Service (QoS) needs specified in the interface. (Well, this is not possible right now in most implementations.) QoS needs like performance, scalability, availability, currency, data quality etc. would determine the binding of interface to capability. Sometimes even the functional needs of services may force an interface to be bound to multiple capabilities developed over a period of time. The problem is how one controls the proliferation of overlapping capabilities.
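To make the interface/capability distinction concrete, here is a minimal sketch (the class, QoS attributes and data sources are all hypothetical, not a real SOA toolkit API) of one logical interface bound to several overlapping capabilities, with the binding chosen from the consumer's declared QoS needs:

```python
from dataclasses import dataclass

@dataclass
class QoS:
    """Quality-of-service needs declared by the consumer per call."""
    max_latency_ms: int    # performance need
    max_staleness_s: int   # currency need (0 = must be fully current)

class CustomerLookup:
    """One logical service interface; which capability answers a call
    depends on the QoS the consumer asks for."""

    def __init__(self):
        # (predicate over QoS, capability) pairs, checked in order.
        self._bindings = [
            (lambda q: q.max_staleness_s == 0, self._from_system_of_record),
            (lambda q: q.max_latency_ms < 50,  self._from_cache),
            (lambda q: True,                   self._from_warehouse),
        ]

    def get_customer(self, customer_id, qos):
        for matches, capability in self._bindings:
            if matches(qos):
                return capability(customer_id)

    # Three overlapping capabilities behind the same interface.
    def _from_system_of_record(self, cid):  # fully current, slower
        return {"id": cid, "source": "core-banking"}

    def _from_cache(self, cid):             # fast, possibly stale
        return {"id": cid, "source": "cache"}

    def _from_warehouse(self, cid):         # bulk history, batch currency
        return {"id": cid, "source": "warehouse"}
```

A consumer that can tolerate stale data gets the cheap cached capability; one that insists on zero staleness is routed to the system of record, all behind the same reusable interface.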

The reasons why overlapping capabilities get developed are not straightforward. Sometimes it is weak governance, sometimes it is for QoS purposes, sometimes it is a trade-off made to achieve results for a business-facing project. This post by Todd Biske elaborates on the last aspect. In practice one can address governance issues, but capability duplication cannot be totally eliminated: QoS needs will force multiple capabilities for the same interface in some cases, and there may be business drivers which force a tactical trade-off to achieve larger business benefits. Well, one way to avoid trade-offs is to plan for the future, which I have suggested in this post. Still, such overlapping capabilities will get built.

So what should one do? One way out is to refactor and streamline the capabilities that are built. Here SOA helps, as consumers are not affected when capabilities change, as long as the interface stays the same. So one should be refactoring and cleaning capabilities, because up-front reusable capabilities are hard to achieve in a working IT organisation. Business leaders must realize this reality and make funding available for these kinds of activities.

Enterprise IT Oracle

It is very surprising that so many competent and smart people come together in an enterprise IT organisation and then create a mess that is unmanageable and beyond their collective capabilities. It is not always a case of ineptitude or incompetence of individuals; it is the nature of the beast. If we are to understand the problems of enterprise IT, we should try to analyze the root causes.

To me, enterprise IT is driven by four major drivers:
1. Own business needs
2. Competitive scenario
3. Regulatory compliance
4. Technology changes

These four drivers have different rates of change. The cost of not attending to the changes differs too, and hence so does the priority of attending to them. Enterprise IT is like an assembly line in action; to make matters worse, it is an assembly line which can't be stopped for maintenance. The four drivers, combined with the fact that you never get any breathing space to fix enterprise IT, result in many tactical decisions which finally lead to a mess of unmanageable proportions.

The ability of enterprise IT to respond to the drivers is constrained by the capability of its own IT organisation, the capability of its suppliers and the capability of other stakeholders. Enterprise IT deals not only with the planning and design of IT but also with building, governing and sustaining it. This requires collective effort from the IT organisation, suppliers and stakeholders; hence any mechanism to avoid the mess must have the participation of all.

To avoid the mess, EA must plan for the future based on these four drivers, and remember to make that plan as flexible as the rate of change of the most important of those drivers. When the plan changes, it can accommodate the most important of the latest changes within the other drivers too. The capability of the IT organisation, suppliers and stakeholders puts constraints on any plan that is created, so capability building must also be addressed within the plan itself. This planning is based on tracking the drivers and constraints, and a sub-organisation within the enterprise architecture community must own it. This organisation can then aptly be called the Enterprise IT Oracle. (An oracle is a person or agency considered to be a source of wise counsel or prophetic opinion; an infallible authority. Not to be confused with Larry Ellison's Oracle Corporation.)

Wednesday, January 10, 2007

Web 2.0 and enterprise data management

This is a great post that everyone who would like to understand Web 2.0 in an enterprise context must read. This article by Sam Lowe provides greater clarity on Web 2.0 and its implications for enterprise data. The interesting point made is that, more than the technologies, it is the ideas behind Web 2.0 that are going to impact how enterprises manage data. The participatory nature of Web 2.0 as applied to data management is a revelation and must form part of the agenda that practitioners shape and develop. To that effect, Sam has also suggested running a workshop. I feel it's a great idea; interested people can congregate and dive more deeply into these issues.

Sunday, December 23, 2007

SOA Fable: Thermometerman

This story is courtesy of James Gardner. The story points out how a shared service provider sticks to service level agreements (SLAs) rather than to the consumer's satisfaction with the service. SLAs often define objective criteria to measure some attributes of a service, and the attributes covered do not necessarily imply consumer satisfaction. Providing personalised satisfaction to a large consumer base is considered uneconomic, so the provider sticks to objective criteria acceptable to a large group of consumers and tries to strike a golden mean between consumer satisfaction and the provider's economy.

The reason for doing so is economic. Service providers segment their service consumers, and for each segment the provider decides which service attributes that segment is sensitive to. The provider then goes on to optimise those attributes alone for that segment.

In doing this segmentation, however, service providers may lump together a diverse group of consumers with diverse aspirations, motivations, intentions and temperaments. The service attributes deemed important for the entire group may be the sum total of the attributes important to its smaller groups, but those smaller groups may want other attributes addressed which the segmentation misses. So instead of achieving a golden mean between economy and satisfaction, it may lead to widespread consumer dissatisfaction.

The alternative is to allow consumers to choose which service attributes are important to them: in essence, to allow the service consumer to define their own SLA, even on the fly. The availability of such a mass-customisation ability is the moral of this story.

In an SOA, the service provider can allow the consumer to choose which attribute of the service they want optimised, and even allow this optimisation to take place on every interaction if that makes sense. Some combinations of attributes, though, are difficult or uneconomical to achieve.

For example, service availability and service currency cannot both be made 100% at the same time: when you provide 100% availability, you may have to sacrifice currency a little, and vice versa.

Allowing such mass customisation might mean more work for the service provider. The provider may have to provide different implementations to satisfy different combinations of service attributes, and may charge differently for each combination to take care of the economics. Whether this is done offline via service contracts or online via an architectural layer doing service-attribute brokering should be left to the provider.
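As a sketch of what such a brokering layer might look like (the attribute names, implementations and tariffs are invented for illustration), the broker can price each combination of optimised attributes and refuse the combinations the provider cannot honour together:

```python
# Hypothetical implementations and per-call tariffs; none of these
# names come from a real product.
IMPLEMENTATIONS = {
    "availability": ("replicated-cache", 0.01),   # always up, maybe stale
    "currency":     ("system-of-record", 0.05),   # always fresh, may be down
    "throughput":   ("batch-endpoint",   0.002),  # cheap, but high latency
}

# Combinations the provider will not promise together.
INFEASIBLE = {frozenset({"availability", "currency"})}

def bind(requested):
    """Return (implementation, price per call) for the set of attributes
    the consumer wants optimised; refuse infeasible combinations."""
    if any(frozenset(requested) >= bad for bad in INFEASIBLE):
        raise ValueError("cannot optimise %s together" % sorted(requested))
    # Charge for every optimised attribute; serve the call from the
    # implementation of the most expensive one.
    price = sum(IMPLEMENTATIONS[a][1] for a in requested)
    impl, _ = IMPLEMENTATIONS[max(requested, key=lambda a: IMPLEMENTATIONS[a][1])]
    return impl, price
```

The tariff table is where the economics live: each feasible combination carries its own price, and the infeasible set is the provider's explicit statement of trade-offs like availability versus currency.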

Sunday, December 02, 2007

Model driven development

Todd has asked a few questions about using models during the development of software. Though his questions are from a business process model perspective, they apply generally to model-driven development as a whole.

Since I have considerable experience in this area, I would like to comment.

In my opinion, modeling negates the need for neither continuous integration nor testing. Unless one can prove models correct with respect to requirements using theorem provers or similar technologies, testing is a must. (Writing those verifiable requirements would take you ages, though.) And one does need to define an appropriate unit, in the model-driven development world, for large enterprise-class developments, to allow for parallel development. Continuous integration is one of the best practices one would not want to lose when multiple units are involved.

We had defined a full-fledged model-driven development methodology, with an elaborationist strategy, for developing enterprise-class components. We modeled the data and behaviour of a component as an object model, which was then elaborated in terms of business logic and rules before being converted into deployable artifacts. We did it this way because business logic and rules were considered too procedural to be abstracted in any usable modeling notation, but that has no bearing on the discussion that follows. The methodology allowed for continuous integration during all phases of development. We had defined the component as the unit for build and test; these units could be version controlled and tested as units. Since it was a complete software development methodology, the same models were refined from early analysis to late deployment. The component as a unit, however, made sense only during the build and test phases. For requirements analysis and high-level design, different kinds of units were required, because during analysis and design different roles access the artifacts, and their needs differ from those of the people who build and test.

Lesson 1: Units may differ during different phases of the life cycle. This problem is unique to model-driven techniques, because in the non-model-driven world there is no single unit which goes across all phases of the life cycle. If you are using iterative methods, this problem becomes even trickier to handle.

We found that models have a greater need for completeness than source code, and cyclical dependencies cause problems. That is, the equivalent of a 'forward declaration' is very difficult to define in the model world, unless you are open to breaking the meta-models. For example, a class cannot have an attribute without its data type being defined, and that data type may itself be a class that depends on the first class being ready. I am sure similar situations arise in business process modeling too. This had a great implication for continuous integration, because these dependencies across units would lock everything into a synchronous step. It is good from a quality perspective but not very pragmatic. We had to devise something similar to a 'forward declaration' for models. I think I can generalise this and say that it will apply to any model-driven development which follows continuous integration.
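The workaround can be illustrated with a two-pass loader: pass one registers every class name as a placeholder (the model-world analogue of a forward declaration), and pass two resolves attribute types against the complete symbol table, so mutually dependent classes can both load. This is a simplified sketch, not our actual tooling:

```python
class ModelClass:
    """A class in the metamodel; attributes map names to ModelClass types."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}

def load_model(declarations):
    """declarations: {class_name: {attribute_name: type_name}}."""
    # Pass 1: forward-declare every class as an empty placeholder.
    classes = {name: ModelClass(name) for name in declarations}
    # Pass 2: resolve attribute types against the full symbol table,
    # so cyclic dependencies no longer lock loading into one step.
    for name, attrs in declarations.items():
        for attr, type_name in attrs.items():
            classes[name].attributes[attr] = classes[type_name]
    return classes

# Two mutually dependent classes load without breaking the metamodel:
model = load_model({
    "Order":    {"customer": "Customer"},
    "Customer": {"lastOrder": "Order"},
})
```

Without the first pass, neither `Order` nor `Customer` could be loaded, since each needs the other to exist already; the placeholder pass breaks that synchronous lock.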

We had our own configuration management repository for models, but one could use a standard source control repository, provided the tool vendor allows you to store modeling artifacts in a plain-text format. (Some source control tools are tolerant of binary files as well, but then you can't do 'diff' and 'merge'.) Devising a proper granularity is tricky, and the point above should be kept in mind. Some tools interoperate well with each other and provide a nice experience (e.g. the Rational family of tools); then your configuration management tools can do a meaningful 'diff' and 'merge' on models too.

Lesson 2: An appropriate configuration control tool is needed even in model-driven development.

The need for regression testing was higher because of the point above. Every change would ripple to every other part connected with it, marking it as changed. Traditional methods would then blindly mark all those artifacts for regression testing. Again, this was good from a quality perspective, though not very pragmatic. We had to make some changes in our change management and testing strategy to make it optimal.
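The naive ripple can be sketched as a reachability computation over the dependency graph: everything reachable from a changed artifact gets marked for regression testing, which is exactly the set that had to be pruned to stay pragmatic.

```python
from collections import deque

def impacted(dependents, changed):
    """dependents maps an artifact to the artifacts that depend on it;
    return everything transitively reachable from the changed artifact,
    i.e. the naive regression-test set."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

In a densely connected model, almost every artifact ends up in this set after a small change, which is why the testing strategy had to be adjusted rather than re-testing everything marked.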

Lesson 3: Units need to be defined carefully to handle the trade-off between parallelism and testing effort during the build phase.

In short, model-driven methods tend to replicate the software development methodology that is used without models. Models provide a way to focus on key abstractions and not get distracted by all the 'noise' (for want of a better word) that goes with working software. That 'noise' itself can be modeled and injected into your models as cross-cutting concerns. In fact, based on my experience with this heavyweight model-driven approach, I came up with a lighter approach called 'Code is the model', which can even be generalised to 'Text specification is the model', removing the code-versus-model dichotomy as far as software development methodology goes.

Nowadays some modeling tools have their own run-time platforms, so models execute directly on that platform. This avoids a build step. But defining a usable and practical configurable unit is still a must. Then defining a versioning policy for this unit and defining a unit & regression testing strategy cannot be avoided. When multiple such modeling tools, each with its own run-time platform, are used, that brings its own set of challenges in defining testable and configurable units. But that's a topic for another discussion!

Friday, November 30, 2007

SOA Fable: Do you know the intent?

Before the advent of SOA, enterprises opened up their central systems to a limited set of users through initiatives called 'Extranets'. The idea being that stakeholders could have limited visibility into the workings of the enterprise, which would help them in their businesses, which in turn would help the enterprise.

So this giant automobile company opened up its inventory to its distributors, and distributors could place orders for spare parts based on the inventory they saw. Soon the company realised that some distributors were misusing the facility to hoard critical spare parts when inventory was running low, and making an extra buck by selling those parts to other distributors at a premium.

The company stopped this facility. To the company's dismay, some clever distributor found another way around. He started placing orders for a ridiculously large number of parts; the system started returning an error response with a suggestion for the maximum number he could order - the maximum number being the level in inventory. This was worse than the earlier situation. Not only was the distributor getting the information he wanted, he was also loading the central system unnecessarily, by continuously running this service for different parts and forcing error responses.
Then, because of some unrelated change, the company stopped giving the correct inventory level in the error response. So this distributor started placing a lot of orders, starting with a high number and gradually coming down to the correct level - in effect doing a binary search. This was worse still. It created a lot of orders in the system, which launched associated work-flows, and then cancelled them, which launched even more work-flows.
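The distributor's trick is an ordinary binary search. A sketch (with made-up numbers) shows why only a handful of accepted-then-cancelled orders is enough to reveal the exact stock level:

```python
def probe_inventory(accepts_order, upper_bound):
    """Binary-search the hidden inventory level using only accept/reject
    responses from an order service. Returns (level, number_of_probes)."""
    lo, hi = 0, upper_bound          # invariant: lo units accepted, hi+1 rejected
    probes = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        probes += 1
        if accepts_order(mid):       # order accepted: at least mid units in stock
            lo = mid
        else:                        # order rejected: fewer than mid units
            hi = mid - 1
    return lo, probes

# Hidden inventory of 137 units, probed within an assumed bound of 10,000.
stock = 137
level, probes = probe_inventory(lambda qty: qty <= stock, 10_000)
print(level, probes)  # finds 137 in roughly log2(10000) ~ 14 probes
```

Each probe here corresponds to a real order (and its work-flows) being created and cancelled, which is why this pattern loaded the central systems so badly.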

Until the last situation arose, the problem was handled as an IT capacity problem and the misdeeds of distributors were not caught. The last hack tried by the distributor nearly broke the back of the central systems, which launched a massive investigation that uncovered these malpractices.

What has this got to do with SOA?
Well, one vision for SOA is that SOA can make enabling services available to consumers. Once enabling services are available, consumers can compose the applications they need. Even if we keep aside the doubts about how one defines these enabling services, the issue of consumer intent is a big one. The intent need not always be malafide. Some consumer running an innocuous service multiple times, to get the data set he wants for the analysis he wants, can break the back of transactional systems. This has nothing to do with service policy. The consumer is willing to abide by the policies set for the service, but his intent is not the one envisaged by the service provider. And in this era of 2.0 this is bound to happen. SOA needs to take cognizance of this issue and handle it. No, CAPTCHAs are not a solution when services are meant for consumption by machines rather than humans.

Moral of this story is, in a true SOA, the architecture must

1. have a mechanism to diagnose violation of normal intent by consumers
2. provide alternate service implementations for genuine exceptional intent
3. provide a seamless switchover from normal intent to exceptional intent
4. provide incentives to promote normal intent and disincentives to reduce exceptional intent - when it makes sense to do so.
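Point 1 can be approximated at the service gateway with something as simple as per-consumer error-rate monitoring over a sliding window: normal consumers rarely force error responses, probing consumers do little else. A minimal sketch follows; the thresholds and the class itself are invented for illustration, not a reference design.

```python
import time
from collections import defaultdict, deque

class IntentMonitor:
    """Flag consumers whose error rate over a sliding window exceeds a
    threshold -- a crude proxy for 'probing' rather than normal use."""
    def __init__(self, window_seconds=60, max_error_ratio=0.5, min_calls=10):
        self.window = window_seconds
        self.max_error_ratio = max_error_ratio
        self.min_calls = min_calls
        self.calls = defaultdict(deque)   # consumer -> deque of (timestamp, ok)

    def record(self, consumer, ok, now=None):
        now = time.time() if now is None else now
        q = self.calls[consumer]
        q.append((now, ok))
        while q and q[0][0] < now - self.window:
            q.popleft()               # drop calls outside the window

    def suspicious(self, consumer, now=None):
        now = time.time() if now is None else now
        recent = [ok for ts, ok in self.calls[consumer] if ts >= now - self.window]
        if len(recent) < self.min_calls:
            return False              # too little data to judge intent
        errors = sum(1 for ok in recent if not ok)
        return errors / len(recent) > self.max_error_ratio

monitor = IntentMonitor()
for i in range(20):                   # a consumer continuously forcing errors
    monitor.record("distributor-42", ok=False, now=1000 + i)
print(monitor.suspicious("distributor-42", now=1020))  # True
```

Detection alone is of course only the first of the four points; the switchover to an alternate implementation (say, a read-only inventory feed for the genuine exceptional intent) is where the architecture work really lies.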

Thursday, November 15, 2007

People or process?

There is an interesting debate going on, that of people versus process in the enterprise architecture function. Now that we are on the subject, let me bring in my perspective.

Legend has it that there were weavers in the Dhaka region of Bangladesh who could weave cotton cloth so fine that a nine-yard piece would weigh less than 50 grams. Well, that art is lost, because those weavers never codified the process of weaving, so the knowledge passed between generations only by word of mouth. When pitted against cheap cloth from Manchester, their better quality cloth lost market share to cheaper but not-so-great cloth. Eventually the whole craft died. It need not have been so, had they codified the process. At least the craft would have survived and could have been revived in these days of green trade/fair trade.

Moral of the story,
1. Not codifying process is no guarantee against obsolescence.
2. Someone who can employ a repeatable process wins in a mass market over someone without process. (And enterprise IT is by no means a niche market).

Having said that, I do believe there are job functions which cannot be fully codified by processes. I believe the enterprise architect's is one such job function. But this job can still be divided into pieces that are more defined and pieces which need a bit of experience and expertise. The defined parts can then be codified and handled with slightly lower skill levels. With this approach a full-fledged enterprise architect, with some help, can do the job of many enterprise architects.

Nobody would have been interested in this model if there were a good supply of enterprise architects. The problem is there are not enough good people to go around. This is a way to make do with what you have got. And yes, it does work in the enterprise architecture function too.

Saturday, November 03, 2007

EA and structuring IT change portfolio

There is no steady state for the EA function, as EA has the unenviable job akin to converting a propeller-driven aircraft into a jet aircraft, while flying. And it does not stop there: by the time EA has managed the turn into a jet aircraft, a new scramjet is already available and waiting to be adopted.

To cope with this constant churn, the usage of EA frameworks is widespread. But, as I had pointed out in my earlier post, results on the ground are the only thing appreciated by end-users. This post by Tom re-emphasised the point about operationalising the Enterprise Architecture, as had this post by Sam. Operationalisation of EA is what is capable of producing the results, no matter what framework was used to structure thinking and what artefacts were built. Unfortunately, after spending a lot of bandwidth on think-and-build activities, there is little spare bandwidth left for operationalising the EA.

In a large enterprise, structuring the business IT change portfolio is a great opportunity for the EA function to make sure EA is operationalised. It is also where the lack of bandwidth hurts the EA function. Somehow this important activity uses arcane practices similar to the waterfall model followed in SDLC. It is no better than business folks placing their bets on initiatives based on the gut feel of a few individuals.

It need not be so. If you look at the large scale picture of enterprise IT, it is a fractal. So what works for structuring changes for a single application can work for enterprise IT too. For an application change, one carries out impact analysis based on functional and non-functional requirements to figure out the changes required. A similar approach can work for enterprise IT too, with the functional requirements coming from business and the non-functional requirements coming from the EA group. There are two critical differences though. Firstly, requirements for an application are far more crystallised than business requirements meant for enterprise IT. Secondly, the understanding of a single application is much more rigorous than that of the entire landscape of enterprise IT. Any approach used to structure the IT change portfolio must bear these facts in mind and evolve a strategy to handle them. Any investment made in this area will go a long way in establishing EA credibility within the organisation.

Wednesday, October 03, 2007

Conform or co-exist?

This post suggesting a need for extreme standardisation needs to be discussed within the industry. The idea of extreme standardisation is extremely appealing but highly impractical. Pulls and counter-pulls within the industry will make sure that there are always competing standards.

Vendors don't agree on who decides on standardisation. There are multiple competing bodies even in the standard-setting space. Who decides what the categories for standards are? Should everything be standardised? That would imply commoditisation - what will vendors bring in as differentiation? There are no easy answers. If we ignore these questions then we will end up with a surfeit of standards on top of the existing mess.

As they said in the Vedic period, "pinde pinde matirbhinna". Translated into English it means: every soul thinks differently. Implied in that is the notion that no thinking is superior or inferior, so who is to say which one should become the standard? Each different specification mechanism or execution methodology is mostly a response to something desired by a user community. So in some sense they represent users' aspirations (and vendors' motivations). How can these different motivations and aspirations be reconciled?

So what is the way out? Well there are ways of tackling these tricky questions.

For example, instead of relying on extreme standardisation, industry bodies can work on interoperability standards. To do that, the first thing we must agree on as an industry is the means to define categorisation and 'domains of discourse'. We can then go on to develop interoperability standards for these various domains (intra-domain as well as inter-domain). Within IT there are various domains of discourse; for example, if you consider the IT change management category, the domains could be business strategy, IT strategy, portfolio planning, requirements & scoping, design & build, verification & validation, roll-out & operational management, systems retirement etc. For a different categorisation there will be different domains of discourse. Each of these domains will have many competing specification mechanisms and execution methodologies. What we need is a means to interoperate between these specifications and methodologies within a domain of discourse and across domains of discourse.

With such interoperability standards in place, the deliverable from one specification mechanism can be used to drive deliverables which use another specification mechanism. Similarly, activities from one methodology can be used to trigger activities of another methodology. If we achieve this as an industry, it will be a great leap forward. (MOF - the Meta Object Facility by OMG - is a close enough example of an interoperability standard.)

Or, if we are lucky, a dominant de facto standard may evolve in every domain of discourse. And if it is far superior to any competing ones, as happened in the case of the RDBMS and SQL, then it can become a de jure standard.

So, however appealing the idea of compliance with well-defined standards, I am afraid we have to learn to manage a myriad of different standards and help them co-exist.

Wednesday, August 22, 2007

SOA Fable: Do you wear a watch?

Most of us wear watches, and except for those who wear one as a piece of jewellery, we expect a service from this piece of equipment. And what is the unique service provided by this equipment? It lets us know what time it is! Wait, hang on. Aren't there enough service providers providing this very service all around us? Every car has a clock in it, every house has many clocks, every PC has a clock. Even every mobile phone, PDA and iPhone has a clock. Why do people wear watches, anyway?

What are your expectations as a consumer from a possible service telling you what time it is?

Availability: Service must be available when consumer needs it.
If a consumer relies only on a mobile phone and the battery runs out, the service is not available when the consumer may need it.

Reliability: Service response must be reliable and should not require double checking.
A consumer is not sure of the time shown by a clock at a public place; he needs to double-check it with another reliable source anyway.

Accessibility: Consumer can access the service whenever he needs it.
A clock in a laptop in a backpack is not easily accessible in a crowded commuter train.

Trust: The consumer trusts that the service is backed by accountability on the part of the service provider. When it is a question of life and death, one can trust the service provided by one's own piece of equipment to be more accurate, reliable and available than a general purpose service. As a consumer, one tends to trust no one but oneself as the most accountable service provider.

This is a very important insight, which helps one prepare a proper versioning policy.
Without that trust, versioning schemes can be misused for creating specialised services.

In enterprises, since SOA is mandated, project owners will use services. But they will make sure that they get their own private version of a service. This is quite easy, by getting veto power over the life cycle of a service and misusing governance for this sake. Assume a service version 1 is in use. A second consumer wants to create another version, because it has additional needs. This version 2 is derived from version 1. Now the first consumer wants an upgrade to his existing service. But he is not willing to accept service version 2 as the base for his next version. He will find any excuse to make sure he gets a service version 1.1 rather than 3. This pattern will keep repeating, and soon there will be a lot of versions changing only in the second qualifier. So you get what I call the parversion (parallel version) anti-pattern.


SOA governance must make sure it guards against this anti-pattern and creates appropriate policies and controls to minimise and eliminate its occurrences. Moreover, governance must recognise that this is a symptom and the root cause lies somewhere else.
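A governance dashboard could flag the parversion smell mechanically: an old major line that keeps sprouting minor versions even after a newer major exists. A crude sketch follows; the two-part version format and the threshold are assumptions made for illustration.

```python
from collections import defaultdict

def parversion_suspects(versions, max_minors=3):
    """Given version strings like '1.4', flag major lines that keep
    growing with minor releases even though a newer major exists."""
    by_major = defaultdict(int)
    for v in versions:
        major, _, minor = v.partition(".")
        by_major[int(major)] = max(by_major[int(major)], int(minor or 0))
    latest_major = max(by_major)
    return [m for m, top_minor in by_major.items()
            if m < latest_major and top_minor >= max_minors]

# The 1.x line grew to 1.5 even though 2.0 and 3.0 exist: the parversion smell.
history = ["1.0", "1.1", "1.2", "1.3", "1.4", "1.5", "2.0", "3.0"]
print(parversion_suspects(history))  # [1]
```

As noted above, such a report only surfaces the symptom; the policy response still has to address why consumers refuse to rebase onto the newer major version.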

Moral of the story: Consumers are willing to bend rules to get an available, reliable, accessible and trustworthy service. Watch out for such rule violations and fix the root cause.

Sunday, July 15, 2007

Blogged for a year

It has been a year since I started to blog regularly (well, at least once a month). The blog has given me a platform to express my ideas. I expected a lot of interaction with fellow professionals. That expectation has not been met, mainly because of my inability to continue conversations. Carrying out conversations over blogs needs a lot of commitment. Balancing work, personal commitments and finding time for this activity has been a challenge for me. For that reason I am in awe of bloggers who blog daily and on a variety of subjects.

But whatever limited interactions I have had, opened my eyes to new ways of looking at things. I am ever thankful to blogosphere for enriching my professional life. I hope to meet some of my on-line acquaintances, whose thoughts I found insightful, in person. I also hope someone, somewhere might have found my thoughts useful and feels the same about me.

Sunday, June 10, 2007

Not whether but when to REST

There is a debate starting to rage about whether REST or SOAP/XML-RPC is the better choice for services. Following is my take on the REST v/s SOAP/XML-RPC debate, in traditional enterprise computing scenarios.

From whatever I have read till now, my opinion is that REST is closer to a distributed object oriented abstraction than to a service oriented abstraction. The following table tries to bring out the similarities between REST and the OO abstraction.

REST | OO
Resources | Classes and objects
HTTP methods (PUT, DELETE, POST, GET) for lifecycle management | Facilities provided by the OO language (constructor, destructor, method invocation, accessor methods)
Resources keep application state information | Objects represent the state
Type safety is provided by XML schema | Type safety is provided by pre-shared class definitions (e.g. using .h files or Java .class files)
Dynamic invocation is possible because of the repository | Dynamic invocation is possible because of reflection


Of course, REST provides a more open distributed object oriented mechanism than, say, CORBA or EJB. It does so by using XML schema for marshalling/unmarshalling and an open protocol like HTTP (as against DCOM, IIOP or RMI).
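The verb-to-lifecycle parallel in the table can be made concrete with a toy in-memory store. Everything here is illustrative (there is no real HTTP server behind it); the point is only how naturally the four verbs map onto object lifecycle operations.

```python
# A toy "resource store" making the REST/OO parallel explicit:
# POST ~ constructor, GET ~ accessor, PUT ~ mutator, DELETE ~ destructor.
class ResourceStore:
    def __init__(self):
        self._resources = {}
        self._next_id = 1

    def post(self, data):            # create: like calling a constructor
        rid = self._next_id
        self._next_id += 1
        self._resources[rid] = dict(data)
        return rid

    def get(self, rid):              # read: like an accessor method
        return self._resources[rid]

    def put(self, rid, data):        # update: like invoking a mutator
        self._resources[rid] = dict(data)

    def delete(self, rid):           # remove: like a destructor
        del self._resources[rid]

store = ResourceStore()
order_id = store.post({"part": "brake-pad", "qty": 4})
store.put(order_id, {"part": "brake-pad", "qty": 2})
print(store.get(order_id))  # {'part': 'brake-pad', 'qty': 2}
store.delete(order_id)
```

Note how the state lives entirely in the resource/object, exactly as the table suggests; the caller holds only an identifier, much like an object reference.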

But it is bound to face some of the problems that distributed object oriented mechanisms faced, e.g. granularity of objects, scalability related issues, and differing consumer experience based on lazy or early instantiation of resource graphs (object graphs in OO).

REST is an interesting way of implementing a distributed object oriented mechanism, and there are times this abstraction is better suited than a pure service oriented abstraction. So in my opinion the debate should not be either REST or SOAP/XML-RPC, but when to use REST and when to use SOAP/XML-RPC. The limiting factor for the time being is availability of tooling and skills. Over a period of time these will develop, and then both can co-exist within enterprises.

Governance

Well, I must share this experience of how an effective governance control can reduce to an inefficient bureaucratic control.
This happened at a small airport, where I had gone to attend a conference.

While boarding the plane we were called in by row numbers. The only hitch was that, the plane being small, we were taken to it in a bus. So we were boarding the bus, not the plane, in sequence of our row numbers. Once in the bus, people sat wherever they wanted to, and when they got out of the bus to board the plane, they were in random order. This defeated the whole purpose of boarding by row number.

Boarding a plane in order of row numbers makes boarding very efficient when passengers board the plane directly. But in the case of boarding by bus, the procedure needs to be changed to have the desired effect, or abandoned.

I think there is an important lesson in this, for people who design governance controls, for systems.

Monday, June 04, 2007

SOA - Necessary and sufficient

SOA is heralded as the 'must have' for business agility. I agree, up to a point. SOA is necessary but not sufficient to achieve the highest degree of business agility. Let me explain why I think so.

In a service oriented world, information systems try to be congruent with the business world, providing information services in support of business services. Business organisations provide business services in order to carry out their business activities. These business services are steps within business activities, and they use information services provided by the underlying IT infrastructure.

However, the underlying IT infrastructure is not fully amenable to this business service oriented paradigm. At the implementation level, IT infrastructure has to deal with non-functional properties, such as responsiveness, scale, availability, latency, cost, skills availability etc. That imposes certain restrictions on implementations. E.g., for reasons of scale we normally separate behaviour and data. Behaviour (as represented in business logic and business rules) scales differently than data (and data stores - databases, file systems). That's why in a typical distributed information system, there are more database servers than servers dedicated to executing business logic.

In a service oriented world, the information services provided by information systems need to mask such implementation issues. The idea that SOA will provide business agility holds true only if information services enable business services to use disparate information systems seamlessly. In an SOA world, business services should lend themselves to rapid re-organisation and redeployment, in terms of business activity volumes, business responsiveness, speed of new product/service introduction etc.

The current thinking seems to be that a set of open standards enabling integration between disparate information systems is all that is needed. With such an integration mechanism, one can create a facade of a business service using underlying disparate information systems. Hence the emphasis on XML schemas, in-situ transformations, service choreography and, to an extent, mediation [between the required business service and the provided information service(s)].

To me this is part of solution. It is the necessary condition but not sufficient.

As I have posted in the past, one really does not know what the granularity of information services should be. If you provide too granular information services, you will be better at reorganising but hard pressed to meet non-functional parameters. If you provide good enough services for current usage, satisfying non-functional parameters, you will have a tough time reorganising. So for all practical purposes, any business service change brings possible information service related changes, rather than just a reorganisation of information services.

That would mean the agility of business service reorganisation comes down to change management in information systems. If you make pragmatic decisions in favour of speed of change, it leads to duplication and redundancy. If you try to keep your information systems pure and without redundancy, you sacrifice speed of change.

So the key appears to be


  • getting your information services granularity just right for all possible reorganisation that would be needed by business. You cannot really know all possible business changes, but you can know up to a certain time horizon. So that you are just re-organising your information services rather than redeveloping.
  • if this is not possible or considered risky, you can take a re-factoring oriented approach. And incrementally build the service definitions.
  • and whenever you change information systems (because despite your best efforts business came up with a change that is not possible with current information service definition), use MDA or Software factories (or any other conceptual to implementation mapping technology) to effect the change from conceptual business services onto its IT implementation. This would bring down the time to make changes. And also would enable you to make pragmatic decisions, because even if there are duplications and redundancies at implementation level, the conceptual field is clean and pure.
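The MDA idea in the last bullet - keep the conceptual definition clean and regenerate the implementation from it - can be sketched in a few lines. The model shape, the service name and the template below are purely illustrative, not any particular MDA tool's format.

```python
# A minimal model-to-code sketch: the conceptual service definition stays
# clean, and an implementation stub is (re)generated from it on change.
SERVICE_MODEL = {
    "name": "CustomerLookup",
    "operations": [
        {"name": "find_by_id", "params": ["customer_id"]},
        {"name": "find_by_email", "params": ["email"]},
    ],
}

def generate_stub(model):
    lines = [f"class {model['name']}Service:"]
    for op in model["operations"]:
        name = op["name"]
        params = ", ".join(["self"] + op["params"])
        lines.append(f"    def {name}({params}):")
        lines.append(f"        raise NotImplementedError({name!r})")
    return "\n".join(lines)

stub = generate_stub(SERVICE_MODEL)
print(stub)
namespace = {}
exec(stub, namespace)                 # the generated stub is valid Python
svc = namespace["CustomerLookupService"]()
```

Even in this toy form, the appeal is visible: a pragmatic, duplicated implementation can be regenerated at will, while the conceptual field stays clean and pure.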

That would be complete SOA for me.

Wednesday, May 30, 2007

Service Aversion to Service Orientation

Well, I have a slightly different take on Service Averse Architecture. It is based on my experience with the Banking, Financial Services and Insurance (BFSI) industry and may not generalise to other industry segments. Information technology (IT) was introduced in BFSI to improve operational efficiencies. If you look at the value chain within BFSI, viz. manufacture-market-sell-service-retire a product or a service, IT was primarily required to take care of the 'service' part. As long as IT expenditure was less than the operational efficiencies it provided, enterprises were happy, notwithstanding delays and budget overruns. Since IT was not commoditised then, whoever could cross the barrier to entry benefited from IT (despite cost and time overruns of IT).

Interestingly, enterprises within BFSI were always 'service oriented' in their business. They did provide specific services to their stakeholders. The problem was always with the information systems they used to support these services. There was a big misalignment between the services that business provided and the info systems they used to provide them. These info systems were always monolithic and closed. It was these info systems which distorted the underlying service culture of the business. And these ill-fitting information systems were the result of what Todd would call 'project culture'.

The interesting point is how a business which itself operated services for its stakeholders was taken over by this project culture and created a 'service averse architecture' in its information systems. It was mainly due to the aura and geeky culture associated with IT. The business leaders did not understand IT, but understood its importance. So they gave free rein to IT leaders. Early IT leaders did not have much understanding of the underlying businesses, so they were in the mode, "Tell us exactly what you want done, and we will do it!" Unfortunately what business wanted done was always a small piece of a big puzzle. Hence multiple monolithic, closed information systems, handling parts of the services that business was delivering to its stakeholders, were developed.

Now that IT is commoditised and the barrier to entry is lowered considerably (buying a mainframe used to be a momentous decision for a CEO, and now IT related decisions are hardly made by CEOs), cost and time have become important. And IT has penetrated the other aspects of the value chain, notably market, sell and even manufacture (which uses business intelligence tools). So IT has become more important to business at the same time as business has become less tolerant of IT's pitfalls.

Also, over the years IT folks started understanding business in more detail and started asking "Why do you want it done this way?" rather than just following orders. It is what my friend Ravi would call a shift from an output oriented to an outcome oriented mindset. So when business and IT finally started coming closer to each other, they started appreciating the need for alignment between the two. SOA, in my opinion, is the vehicle for that. SOA helps IT recast itself in business terms.

Most of the organisations out there have ‘Service Averse Architecture’ within their information systems. And the organisations that are doing transitions to SOA are the ones where the IT leaders have made that paradigm shift from output to outcome-oriented mindset. These are the leaders who understand importance of business and IT alignment and how SOA can help achieve that.

Unfortunately, leaders buying into the SOA vision is just part of the story. It means the enterprise is willing to make the transition to SOA, but whether it will be done successfully or not depends on changing the entire organisational culture from undue competition to more trust and co-operation.

Wednesday, May 02, 2007

Commitment v/s involvement

A recent conversation with one of the project managers tasked with delivering services for the enterprise (SDT - services delivery team) reminded me of the old adage about commitment and involvement. The project manager said he was delivering to agreed project requirements and had put change control in place to handle the churn in requirements. So he believed the SDT was totally involved with the project, for the success of the project and the SOA initiative. The old adage goes thus: "In a breakfast of ham and eggs, the chicken is involved but the pig is committed". What we need from the SDT is not involvement but commitment (of the kind displayed by the pig).

The requirements for a project invariably change. The service delivery team, being a subcontractor to the project team, is going to be the last team to know about changed requirements and the first one expected to deliver its part of those changes. So the SDT is going to be the fall guy for all the problems the project team faces, because the SDT will not be able to deliver after being placed in such a precarious position.

So what we need from the SDT is commitment to the project rather than involvement with the project. Knowing and anticipating changes to requirements, from a services perspective, will be a key challenge. The major reasons for requirement changes I have seen in the past are:

  • Wrong people giving requirements
  • Strategy not getting properly articulated down the line, leading to wrong requirements

The first part is addressable by the project team by involving all stakeholders appropriately. It is the latter part which results in major problems. Most business projects are the result of some strategic decision from the executive. The way strategy gets articulated from the top executive down to the people executing projects is nothing better than 'Chinese whispers'. Every link in the chain adds its own spin and overloads elements of the strategy with its own agenda. The SDT, being an enterprise wide initiative, should be able to rise above these spins. So the SDT needs to be committed to project success by being on par with projects, rather than being happy with subcontractor status. That is the only way an SDT approach will work in a large enterprise. The SDT should be treated as one of the strategic projects undertaken by business, rather than a mere service provided by IT for the rest of the projects.
This will guarantee SDT's commitment (of pig variety).

Tuesday, April 24, 2007

AGILE Outsourced

AGILE is a space Scientific Mission devoted to gamma-ray astrophysics supported by the Italian Space Agency (ASI), with the scientific and programmatic co-participation of the Italian Institute of Astrophysics (INAF) and the Italian Institute of Nuclear Physics (INFN).

AGILE was launched by the Indian PSLV rocket from the Sriharikota base (Chennai-Madras). The launch was made, as planned, on 23rd April 2007 at 12 00 a.m.
Every process was executed correctly.

Video is now available

Tuesday, April 10, 2007

Centralised service delivery team

Typically in the business IT world, the budgeting and portfolio definition phase determines what business benefits should be targeted in a given period and the maximum cost that can be paid to achieve those benefits. Business projects are then executed to deliver business benefits while minimising the cost and time to get those benefits. So notwithstanding the various benefits of building reusable services for the enterprise, projects tend to do point solutions, mainly because of these budget and time considerations.

In response to this situation, one of the best practices that has evolved in SOA implementations is to have a central team implementing services for business projects. This has been a good idea. It can address many of the governance challenges, mostly because this central service team (let's call it the services delivery team - SDT) is managed and governed by principles set by enterprise architects for SOA, rather than by budget and cost considerations alone.
However there are a couple of challenges in putting this to practice.

  • Having a right engagement between project team and this SDT is a big challenge. This engagement model breaks or makes the future of SOA, in the enterprise. It is vitally important to get this model right.
  • Getting the composition of this SDT right is another challenge, that must be addressed.

Initially the SDT was set up as an island which owned the services and their interfaces. The SDT utilised existing capabilities or, whenever necessary, got the capabilities built. The engagement between the project team and the SDT was mainly of an outsourcer-outsourcee type. The project team defined its requirements and threw them over to the SDT, which then worked out the interfaces (keeping reusability in mind) and further outsourced the building of required capabilities (if necessary) to the teams maintaining the systems which might provide them. The SDT had only application architects and high level designers, apart from project managers. So the role of this SDT had become that of systems integration designers, and SOA standard tools had become SI tools.

This mode of working had turned into a waterfall life-cycle model, partly because of the way the engagement worked. This started to add a lot of overhead, in terms of time and effort, to project estimates. At the same time, benefits were not immediately realised by projects. As a result, project teams started resisting this engagement model, which in turn was viewed as rejection of SOA by project teams (which it was not). When project managers are measured by budget and schedule compliance alone, it is natural for them to resist an engagement model which takes away their control over how much budget and schedule risk they can afford to take. So SDTs too are facing problems.

I think a new co-sourced engagement model needs trying out. In this model, the services delivery team works as part of the project team, governed by the same budget and schedule considerations, but also by SOA principles. When these two sets of governance principles contradict each other, compromises have to be made. These compromises are inevitably made in favour of the project; only rarely, in the case of strategic services, are they made in favour of the services. These compromises, in their wake, leave behind what I call 'non-services'. Some of these non-services need to be refactored into services by a part of the services delivery team which gets its funding straight from the CIO. This team has limited funding, hence can only turn so many non-services into services, resulting in a backlog. Over a period of time, the size of this backlog would give a fair understanding of the costs involved in making tactical compromises against strategic imperatives. This model needs to run for some time for its efficacy to be known, and for it to work, a strong SOA governance framework and a strong testing facility need to be in place.

Any sharing of experience in this area is most welcome.

Tuesday, February 13, 2007

Camping sites not slums!

Thanks Todd for an elaborate response to my earlier post. In the context of the city planning analogy for SOA that we are discussing, we agree that

  1. Slums do get developed.
  2. Slums are undesirable.
  3. Slums need to be contained and eventually removed.
  4. SOA helps containment and removal of slums.

The only point of disagreement is about effective governance. Todd argues that effective governance can make sure that slums don't spring up in the first place; conversely, if slums are springing up, governance is ineffective.

My contention is that such a governance mechanism may be difficult to establish in some cases. Todd has rightly pointed out that different governance models need to exist for organizations with different focuses (viz. squeezing value v/s growth). However, the organization I was working with was in both these modes simultaneously. It was an organization created out of many mergers and acquisitions in a very short time frame. It is common (well, this is my opinion and I have no data) for such organizations to use one part of the merged entity to grow another part, while squeezing value out of the first part. Moreover, value creation (growth) is not always driven by the normal value chain (viz. design >> build >> sell >> service). In this particular case a clever business person had figured out a financial engineering plan to release some value (millions of dollars) away from the normal value chain, and IT support was required to hasten this plan.

In such cases it appears that it is really difficult to have governance mechanisms reconciling both these situations (growth and squeezing value). What we had was a 'squeezing value' focused governance mechanism, so many IT assets (which were really growth focused) were created outside of the normal, 'squeezing value' focused, governance. There was a danger that these assets would then be further utilized by normal 'squeezing value' focused projects. So we had a slum, and the danger of an ecosystem developing with the slum at its center.

We could avert dependency on the slum through some governance scheme. But then there were debates about not utilizing the slum that was already there; the projects not using it were seen as redeveloping those capabilities 'unnecessarily'. The slum's creation escaped governance controls because it was part of a 'growth' focused effort and evaded the 'squeezing value' focused governance. But later projects not using the slum were caught in a debate, because they were under the 'squeezing value' focused governance.

The key question, then, is how to have a reconciled governance model catering for both growth and squeezing value, which would then enable transitioning assets from the 'growth' governance model to the 'squeezing value' governance model.

My belief is that the answer to this question will provide the effective governance Todd suggests, and SOA will be part of that answer. Then instead of developing slums we'll develop temporary camping sites, which do provide some capabilities and are governed. These are transient capabilities waiting to be mainstreamed and made permanent, if required.

If it is not too late, I would love to hear Todd touch upon some of these issues in his upcoming webinar.

Monday, February 12, 2007

City planning and slum control

Since we are talking about enterprise architecture as analogous to city planning, I thought I would bring in my developing-world perspective on how slums develop in a city despite a nice city planning guide, in order to bring out the importance of having policies for dealing with slums as part of that guide.

In the developing world, we are quite used to slums springing up in a city despite a city planning guide. A slum is a microcosm of a city, but without any planning or vision. It is like a city within a city, with its own ad-hoc rules and regulations, and it does provide some useful services to the rest of the city. No doubt it is a sign of broken governance, but it is not just that. Slums survive and thrive because they are needed for the proper functioning of the city and they do provide some value to it. However, slums are not desirable. City planners must contain and eventually get rid of them.

Similar situations arise in the enterprise IT world, despite a nicely laid out enterprise architecture and strong governance. This article discusses some such cases of deviation, but lays the blame squarely on the ineffectiveness of governance. While that may be true in some cases, sometimes circumstances force one to bypass guidelines and governance. So what we also need is a framework to rectify the situation post facto. In city-planning terms, we need a proper slum control and removal policy.

To illustrate, let me give this example.

A large enterprise has a proper enterprise architecture defined, as intended by Annie Shum's paper mentioned in this article. There are guidelines and a governance mechanism, and based on these a multi-year IT plan is drawn up. One element of the plan is to build subject-area-specific data services. The build of these services is governed by principles laid out in the enterprise architecture: the build should be iterative; TCO should be low; the resultant services must be maintainable, flexible, performant, etc. It is also tied to business benefits, and the capability is sponsored by a business-facing project which plans to use it in a year's time. Now this programme is midway when another business sponsor comes up with a proposal to build a slightly different set of services for the same subject area (where the attributes are very specific to the problem at hand, their granularity is not generic enough and their currency is very specific to the problem). This new business sponsor has a solid business case: he is promising to add millions of dollars to the bottom line if this new set of services is made available within a short span of time. There is a very small window of opportunity from a business perspective, and it does not matter if the underlying capability does not follow any of the enterprise architecture guidelines, principles or governance processes, as long as that window is utilized. The matter reaches the highest level of decision making within the enterprise (e.g. the CEO), and you know who would win the argument. So this set of services gets built; it does not fit the plan laid out by the enterprise architecture, it may not use the technologies the architecture suggests (procuring them in time for this solution is an issue), and consequently it may not use the solution architecture laid out by the enterprise architecture guidelines.

In short, this is a slum developing in a nice city (viz. enterprise IT) governed by a city-planning guide (viz. the enterprise architecture). And as an enterprise architect, if you protest too much, I am sure you would be shown the door. You cannot argue with millions of dollars added to the bottom line. So what should an enterprise architect do?

1. Do nothing. This would mean more such slums spring up, and then it is just a question of time before governance, and consequently the enterprise architecture, breaks down totally. So it does not appear to be a viable option.

2. Demolish the slums. This is possible if what was built was a one-off capability, not required on an ongoing basis. As soon as the utility of the capability diminishes below a certain threshold, get rid of it completely. If required, by being very stern.

3. Rehabilitate the slums. This is the only option if what was built is not a one-off capability but is required by the enterprise on an ongoing basis. The reason it bypassed the enterprise architecture guidelines and governance was to catch a window of opportunity for a specific business benefit; one cannot avoid such incidents. What we need now is to refactor this capability as per the enterprise architecture guidelines and governance processes, and bring it into the mainstream of enterprise IT. In short, we must rehabilitate the slums (if required, with some amount of redevelopment).

There may be hybrid options based on specific situations, but an enterprise architecture plan, as a city planning guide, must provide for this eventuality as well. I have seen this type of issue crop up more than once, so it is difficult to dismiss it as a statistical outlier not worthy of a strategic response.

Sunday, January 21, 2007

CBD, SOA, whats next?

Most engineering disciplines have managed to reduce their problems to ones of component assembly. Individual components are well defined and engineered to satisfy a very concise set of requirements. The system-level problem then reduces to selecting the proper components and assembling them to satisfy the overall system's requirements.

Doing things this way has its benefits, as other branches of engineering have well established. Software engineering is trying to do the same, but without much success, at least in the business IT world. We have tried component based development (CBD) and are now trying SOA, but we are not any closer to the panacea of component assembly.

What is the constraint preventing software engineering from moving to the next level? Implicit in this question is the assumption that software engineering wants to move to a component assembly world. Well, I am not sure that is true either. And I am not alone in this thinking; see this post.

So the problem is not entirely technical. It is also one of business motivation and viable business models. In other branches of engineering, craft became engineering when it promised to bring prosperity to all stakeholders. In software engineering, what is the business model that will let multitudes of component makers flourish while the big guns concentrate on component assembly? Is it a standardisation effort that is lacking? Is the ultimate end IT user really interested in such a component assembly world? After all, in the business IT world, enterprise IT is a main differentiator for an enterprise, which prevents it from being commoditized as a service or product provider.

These are the hard questions the industry must grapple with. Otherwise, every couple of years we will have a wave, be it CBD or SOA, and as an industry we'll not move much.

Each such wave (CBD, SOA) takes us a bit further. But to make the transition to the next level we need an industry-wide push, not vendor-specific attempts, and it must include all stakeholders: business IT consumers, service providers and product vendors. Perhaps the OMG can take the lead?

Thursday, January 11, 2007

SOA: Reuse, of Interface or Capability?

The SOA reusability debate is a bit confused. For more clarity, we must understand the difference between a service interface and the capability behind it. A service exposes an interface which is backed by a capability. From a service consumer's perspective, reusability of the interface is enough; from a builder's perspective, reusability of the capability is more important. It is the latter which is difficult (or impossible in some cases). It is possible for the same interface to be mapped to multiple capabilities. For example, an interface can be mapped to multiple capabilities based on the Quality of Service (QoS) needs specified in the interface (well, this is not possible right now in most implementations). QoS needs like performance, scalability, availability, currency, data quality etc. would determine the binding of interface to capability. Sometimes even the functional needs of services may force an interface to be bound to multiple capabilities developed over a period of time. The problem is how one controls the proliferation of overlapping capabilities.
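To make the interface/capability distinction concrete, here is a minimal sketch. Everything in it is hypothetical: the service name, the two QoS attributes (latency and staleness) and the backing capabilities are illustrative assumptions, not the API of any real SOA product. One interface is exposed; the binding to a capability is chosen per call from the QoS needs the consumer declares.

```python
from dataclasses import dataclass


@dataclass
class QoS:
    """QoS needs declared by a consumer, or guarantees offered by a capability."""
    max_latency_ms: int   # worst acceptable/offered response time
    max_staleness_s: int  # oldest acceptable/offered data ('currency')


class CustomerLookup:
    """One service interface; callers never see which capability serves them."""

    def __init__(self):
        # Each backing capability advertises the QoS it can honour.
        self.capabilities = [
            ("cache",     QoS(max_latency_ms=10,  max_staleness_s=3600), self._from_cache),
            ("warehouse", QoS(max_latency_ms=500, max_staleness_s=60),   self._from_warehouse),
        ]

    def get_customer(self, customer_id, qos):
        # Bind the call to the first capability whose guarantees meet the need.
        for name, offered, handler in self.capabilities:
            if (offered.max_latency_ms <= qos.max_latency_ms
                    and offered.max_staleness_s <= qos.max_staleness_s):
                return name, handler(customer_id)
        raise LookupError("no capability satisfies the requested QoS")

    def _from_cache(self, customer_id):
        return {"id": customer_id, "source": "cache"}

    def _from_warehouse(self, customer_id):
        return {"id": customer_id, "source": "warehouse"}


svc = CustomerLookup()
# A consumer that tolerates stale data gets the fast cached capability;
# one that insists on fresh data is bound to the slower warehouse.
print(svc.get_customer(42, QoS(max_latency_ms=50, max_staleness_s=7200))[0])
print(svc.get_customer(42, QoS(max_latency_ms=1000, max_staleness_s=120))[0])
```

The point of the sketch is that both consumers reuse the same interface while two distinct capabilities coexist behind it, which is exactly the overlap-proliferation problem described above.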

The reasons why overlapping capabilities get developed are not straightforward. Sometimes it is weak governance, sometimes it is for QoS purposes, sometimes it is a trade-off made to achieve results for a business-facing project. This post by Todd Biske elaborates on the last aspect. In practice one can address governance issues, but capability duplication cannot be totally eliminated: QoS needs will force multiple capabilities for the same interface in some cases, and there may be business drivers which force a tactical trade-off to achieve larger business benefits. Well, one way to avoid trade-offs is to plan for the future, which I have suggested in this post. Still, such overlapping capabilities will get built.

So what should one do? Well, one way out is to refactor and streamline the capabilities that are built. Here SOA helps, as consumers are not affected when capabilities change, as long as the interface stays the same. So one should keep refactoring and cleaning capabilities, because up-front reusable capabilities are hard to achieve in a working IT organisation. Business leaders must realize this reality and make funding available for these kinds of activities.

Enterprise IT Oracle

It is very surprising that so many competent and smart people come together in an enterprise IT organisation and then create a mess that is unmanageable and beyond their collective capabilities. It is not always a case of ineptitude or incompetence of individuals; it is the nature of the beast. If we are to understand the problems of enterprise IT, we should try to analyze the root causes.

To me enterprise IT is driven by four major drivers,
1. Own business needs
2. Competitive scenario
3. Regulatory compliance
4. Technology changes

These four drivers have different rates of change. The cost of not attending to those changes is different too, and hence so is the priority of attending to them. Enterprise IT is like an assembly line in action; to make matters worse, it is an assembly line that can't be stopped for maintenance. The four drivers, combined with the fact that you never get any breathing space to fix enterprise IT, result in many tactical decisions which finally lead to a mess of unmanageable proportions.
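One way to see how the drivers differ in urgency is a back-of-the-envelope scoring sketch. The numbers and the rate-times-cost formula are purely my own illustrative assumptions, not a published framework; the point is only that priority falls out of how fast a driver changes and how costly it is to ignore.

```python
# Hypothetical scores: rate of significant change per year, and
# cost of inaction on a 1-10 scale. Both are made-up illustrations.
drivers = {
    "own business needs":    (4.0, 8),
    "competitive scenario":  (2.0, 7),
    "regulatory compliance": (1.0, 10),
    "technology changes":    (3.0, 4),
}


def priority(rate, cost):
    """A naive urgency score: fast-moving, costly-to-ignore drivers rank first."""
    return rate * cost


ranked = sorted(drivers.items(), key=lambda kv: priority(*kv[1]), reverse=True)
for name, (rate, cost) in ranked:
    print(f"{name}: {priority(rate, cost):.0f}")
```

With these made-up inputs, own business needs dominate while regulatory compliance, though costliest to ignore, changes so slowly that it ranks last; change either input and the ordering shifts, which is why the plan must flex at the rate of whichever driver currently scores highest.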

The ability of enterprise IT to respond to these drivers is constrained by the capabilities of its own IT organisation, its suppliers and its other stakeholders. Enterprise IT deals not only with the planning and design of IT but also with building, governing and sustaining it. This requires collective effort from the IT organisation, suppliers and stakeholders, hence any mechanism to avoid the mess must have the participation of all.

To avoid the mess, EA must plan for the future based on these four drivers, and remember to make that plan as flexible as the rate of change of the most important of the drivers listed above. When the plan changes, it can accommodate the most important of the latest changes in the other drivers too. The capabilities of the IT organisation, suppliers and stakeholders put constraints on any plan that is created, so this capability building must also be addressed within the plan itself. This planning is based on tracking the drivers and constraints, and a sub-organisation within the enterprise architecture community must own it. That organisation can then aptly be called the Enterprise IT Oracle. (An oracle is a person or agency considered to be a source of wise counsel or prophetic opinion; an infallible authority. Not to be confused with Larry Ellison's Oracle Corporation.)

Wednesday, January 10, 2007

Web 2.0 and enterprise data management

This is a great post that everyone who wants to understand Web 2.0 in an enterprise context must read. This article by Sam Lowe provides greater clarity on Web 2.0 and its implications for enterprise data. The interesting point made is that, more than the technologies, it is the ideas behind Web 2.0 that are going to impact how enterprises manage data. The participatory nature of Web 2.0 as applied to data management is a revelation and must form part of an agenda that practitioners shape and develop. To that effect, Sam has also suggested running a workshop. I feel it's a great idea, and interested people can congregate and dive more deeply into these issues.