Saturday, December 30, 2006
Requirements elicitation
The ways and means of software design, build and test have progressed in leaps and bounds in recent years. But when it comes to requirements management, the IT community is still stuck in the stone age. Formal approaches have been tried and have not worked tremendously well in business IT scenarios. In one of my earlier posts (Requirements management is crucial) I tried to analyze why a formal approach may not work well for business IT systems.
That being said, even with the semi-formal ways and means we use for requirements elicitation and management, we can do much better. For example, in my current project one of my business analysts (BAs) was complaining about how inept users are at giving requirements. I have heard that umpteen times by now. What we in IT fail to understand is that business users and IT are two different cultures trying to speak with each other. Not only are the cultures different, even the languages are different.
My BA holds a workshop to elicit requirements, and use cases are his means. But the business users concerned have never heard of use case terminology. They can't specify their requirements in the structured way a use case expects. It would help them tremendously if we told them beforehand what we expect from them, and in what format. Maybe a mock exercise or a worked example can do the trick.
Another thing we fail to understand is that there are multiple types of requirements, and the use-case-centric approach tends to lump them together. There are strategic and policy-level requirements, which are common across multiple use cases, whereas operational requirements are specific to individual use cases. When we mix all of these together in a use case, we are more often than not going to get requirement changes, because the users empowered to specify these different kinds of requirements are different people. When an operational user speaks on behalf of the policy makers, he is making a mistake. And we in IT encourage him, by pressurising him to specify his requirements faster!
We need better mechanisms to handle this than plain use cases. I sincerely wish there were more tools in this space to help my BA and users alike. Till then I have to devise manual means to identify operational requirements and strategic requirements, and manage their life-cycles separately. It will then be clearer to my BA why he is getting so many requirement changes.
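A minimal sketch of how that manual separation might be mechanised, assuming nothing fancier than a tagged list of requirements; the kind tags, owner field and churn report are hypothetical illustrations, not any specific tool's model:

```python
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    STRATEGIC = "strategic"      # policy-level, spans many use cases
    OPERATIONAL = "operational"  # specific to one use case

@dataclass
class Requirement:
    rid: str
    text: str
    kind: Kind
    owner: str                   # who is empowered to change it
    use_cases: list = field(default_factory=list)
    revisions: int = 0           # crude life-cycle metric

def change(req: Requirement, new_text: str, requested_by: str):
    """Accept a change only from the empowered owner; count churn."""
    if requested_by != req.owner:
        raise PermissionError(f"{requested_by} cannot change {req.kind.value} "
                              f"requirement {req.rid}; owner is {req.owner}")
    req.text = new_text
    req.revisions += 1

def churn_by_kind(reqs):
    """Show the BA where requirement changes are really coming from."""
    totals = {k: 0 for k in Kind}
    for r in reqs:
        totals[r.kind] += r.revisions
    return totals
```

Even this much makes the conversation with users concrete: an operational user asking to change a strategic requirement is flagged, instead of silently becoming another "requirement change".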
Wednesday, December 27, 2006
Enterprise architecture in MDA terms
Recently one of my colleagues quipped that UML is nothing but pretty pictures. But at the same time he wanted to use MDA for EA. He pointed to this document as a good start.
I feel he was wrong about UML being nothing more than pretty pictures. It has a meta-model behind it, which can be extended and used in whatever way you want. I myself have extended UML to capture object-relational mappings and generated a lot of code from it. Given such misconceptions about UML, it is no wonder there is big resistance to using MDA in enterprise IT. But things are changing. There are now attempts to make MDA the means for enterprise architecture definition and deployment. There are a few challenges in achieving this, though.
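To illustrate the meta-model point, here is a toy sketch (not my original tooling, which used a UML tool's extension mechanism) of generating DDL from model elements tagged with a hypothetical <<table>> stereotype; the profile and names are made up for illustration:

```python
# A toy 'profile': classes carry a stereotype, attributes map to columns.
model = {
    "Customer": {"stereotype": "table",
                 "attributes": [("id", "INTEGER", "pk"),
                                ("name", "VARCHAR(100)", None)]},
    "AuditNote": {"stereotype": "transient", "attributes": []},  # not persisted
}

def generate_ddl(model):
    for cls, spec in model.items():
        if spec["stereotype"] != "table":
            continue  # the stereotype decides which elements become SQL
        cols = [f"{n} {t}" + (" PRIMARY KEY" if tag == "pk" else "")
                for n, t, tag in spec["attributes"]]
        yield f"CREATE TABLE {cls.lower()} ({', '.join(cols)});"

for ddl in generate_ddl(model):
    print(ddl)  # CREATE TABLE customer (id INTEGER PRIMARY KEY, name VARCHAR(100));
```

The picture is incidental; the stereotyped model is data that a generator can act on, which is the whole argument against "pretty pictures".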
I have often wanted models for every cell of the Zachman framework. For me, the downward journey within a column of the Zachman framework is one of model refinement, and the horizontal journey is one of model traceability. However, the Zachman framework is just that: a framework. To be useful, it needs to be fleshed out, and the canvas is very big. So those enterprise architects within an enterprise who believe in MDA should take a couple of responsibilities upon themselves: (a) create models for every cell of the Zachman framework and prove that model transformations do work, by refining models down the cells; and (b) create a body of knowledge on deploying and governing these models across the framework. How to fit this into the normal IT governance framework and secure funding is a big challenge. For that, I propose we first use model-driven development (MDD) to prove the value of an MDA-like approach.
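A hedged sketch of the refinement-plus-traceability idea: each transformation step down a Zachman column records which upstream element produced which downstream element, so the horizontal traceability falls out of the vertical refinement. All names here are illustrative:

```python
trace = []  # (source_model, source_element, target_model, target_element)

def refine_entity_to_table(conceptual):
    """One downward step: conceptual entity (owner row) -> logical table (designer row)."""
    logical = {}
    for entity, attrs in conceptual.items():
        table = entity.lower() + "_t"
        logical[table] = [a.lower().replace(" ", "_") for a in attrs]
        trace.append(("conceptual", entity, "logical", table))
    return logical

conceptual = {"Customer": ["Customer Id", "Legal Name"]}
logical = refine_entity_to_table(conceptual)
print(logical)  # {'customer_t': ['customer_id', 'legal_name']}
print(trace)    # [('conceptual', 'Customer', 'logical', 'customer_t')]
```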
MDA is a big culture shock for most IT organisations, precisely because everyone out there thinks UML is nothing but pretty pictures. Those who believe in MDA need to start small and prove the value of the approach; only then can we go to the next level, which is making EA congruent with MDA. Using MDD is a very good way to begin proving the value of MDA, unless you find an organisation that is sold on MDA to begin with. In short, this is a very tough ask, and a lot of farming/hunting is required to nurture the approach. Being a practitioner, I would suggest trying this approach out on a small scale, in non-mission-critical projects.
Another problem we might face is that models are a rigid way of capturing knowledge. They are suitable for the more defined aspects of enterprise architecture (i.e., all the circles of the TOGAF framework) but not for the more fluid aspects (like the business and IT strategies required in a few cells of the Zachman framework). So from a TOGAF perspective they are fine, but not from a Zachman perspective. To be used with the Zachman framework, we may have to augment MDA with something more: some sort of unstructured knowledge repository. But this is a long way off and can be ignored for the time being.
It is good to see interest in MDA being reinvigorated.
Thursday, December 14, 2006
Services are not ACID
ACID stands for atomicity, consistency, isolation and durability. Any transactional system needs to provide these properties. In file-based systems, it was the problem of the designer and implementor (coder) to arrange for these properties. Then, with the advent of the RDBMS, these responsibilities were taken up by the RDBMS itself. Again, in the early days of distributed computing, the responsibility (of two-phase commit) fell on designers/coders, until the X/Open XA architecture introduced a transaction co-ordinator into every RDBMS. Life became easier after that (from an ACID perspective).
ACID is critical for transactional systems. Services in SOA do not guarantee ACID properties; they require compensatory services to achieve an ACID-like effect. Those of you who have worked with file-based information systems and then moved on to RDBMSs will understand my concern about services not being ACID all the more. In an OLTP world, a transaction followed some time later by a compensatory transaction is not exactly equivalent to an ACID transaction. This may not matter in most cases, but when it does, the effect can be pretty nasty, especially when a service executes an automated protocol in one service call and then tries to undo its effects using a compensatory service. This kind of thing happens in straight-through processing scenarios. It remains one of my major concerns.
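To see why a compensating service is not a rollback, compare a local ACID transaction with a service-style transfer. A minimal sketch; the service stubs are hypothetical, and the point is the window between the first commit and the compensation:

```python
import sqlite3

# Local ACID transaction: both updates commit together or not at all.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100), ("B", 0)])
with conn:  # commits on success, rolls back on error; no partial state is visible
    conn.execute("UPDATE account SET balance = balance - 60 WHERE id = 'A'")
    conn.execute("UPDATE account SET balance = balance + 60 WHERE id = 'B'")

# Service-style equivalent (hypothetical stubs): each call commits on its own.
def debit_service(acct, amt): pass       # commits immediately at the provider
def credit_service(acct, amt): pass
def compensate_debit(acct, amt): pass    # a *new* forward action, not a rollback

def transfer(amt):
    debit_service("A", amt)              # visible to everyone as soon as it commits
    try:
        credit_service("B", amt)
    except Exception:
        # In the gap before this runs, other parties can observe and act on
        # the half-done state: no isolation, and the undo may itself fail.
        compensate_debit("A", amt)
```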
SOA, funding and hidden gems
There are some good posts on services and their characteristics. The main thrust of these posts is to bring out the characteristics of an enterprise service. There is a comparison between city planning and SOA, and one thought encourages services of different granularities to co-exist, with an analogy of niche retailers versus Walmart.
However, the way projects are funded in enterprise IT will hinder this approach. Enterprise IT, as far as funding goes, is governed more like a communist politburo than a capitalist entrepreneurial system. There are no venture capitalists ready to fund (seemingly) wacky ideas; funders would rather go with proven ways of doing things. So the idea of having niche services co-existing with run-of-the-mill enterprise services is a non-starter. That does not mean it will not happen.
Traditionally, departments moonlight on their budgets and create funding for these niche capabilities (albeit not in service form), but those capabilities then remain outside the purview of enterprise architecture, hidden in the back alleys of the enterprise's hiddenware. This also prevents the proper utilisation of these capabilities. The same thing can happen in an SOA: departments will create niche services and somehow fund them, but these services will stay below the radar of the rest of the enterprise.
An enterprise architect has to live with this fact of life and provide means to unearth such hidden gems and bring them back into the EA fold for governance. As mentioned in the posts above, a collection of such niche services may be a viable alternative to a coarse-grained enterprise service, but only if we know such niche services exist.
Solving the funding issue, to borrow a phrase from Ms. Madeleine Albright, is above the pay grade of most enterprise architects and best left to business leaders to figure out!
Monday, December 04, 2006
SOA and demonstrating ROI
Large enterprises in the BFSI space typically have conservative cost-benefit accounting practices, focused on accountability. Within these enterprises, IT departments are often flogged for not achieving ROI. This also makes it a major area of concern for an enterprise architect.
In the 2007 trend analyses by IT experts, one of the major trends mentioned is the difficulty of demonstrating ROI. Demonstrating ROI has traditionally been a problem in IT shops. With a stovepipe architecture, at least you could attach your costs to the relevant information systems and figure out the benefits accrued because of each information system; whether it made sense to have a particular information system was obvious. Of course, the problem of accounting for intangible benefits was always there.
The situation became more difficult with the sharing of infrastructure, be it client-server systems or, lately, EAI. With SOA gaining traction, the issue of demonstrating ROI is becoming even harder to handle. There are several issues, ranging from when a project is considered delivered, to how costs are measured, apportioned and paid for, to what constitutes a benefit and how to measure it. Most of the time, IT folks are on the defensive in these debates.
There are problems with measuring both the fixed costs incurred while building these systems and the running costs incurred while operating them. On the fixed-cost front, the problem arises mainly because infrastructure and non-functional services are reused seamlessly. Note that this is different from any sharing of infrastructure you may have seen earlier: even with sharing, there were clear-cut boundaries between users. With SOA, the boundaries are blurred as far as infrastructure and non-functional services go. One does not know how many future projects will use these services (with or without refactoring), and hence has no clue how to apportion the fixed costs among users. This sometimes turns projects away from using the common infrastructure, as the up-front costs are deemed too high; if yours is the first project, you rue the fact that you have to build the entire infrastructure and own all the cost. The enterprise needs a clear policy for these scenarios, so that projects know how cost is going to be charged.
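One possible shape for such a policy, sketched under the assumption that the enterprise simply re-divides the shared fixed cost evenly as adopters join and rebates earlier projects; the figures and the rule are illustrative, not a recommendation:

```python
def share_per_project(fixed_cost, n_projects):
    """Even share of the shared infrastructure's fixed cost among current users."""
    return fixed_cost / n_projects

fixed_cost = 900_000  # illustrative build cost of shared SOA infrastructure
for n in range(1, 4):  # 1st, 2nd, 3rd project adopting the infrastructure
    print(f"with {n} adopter(s): each is charged {share_per_project(fixed_cost, n):,.0f}")
# The first project's charge falls from 900,000 to 300,000 as others join,
# the difference being rebated, which removes the first-mover penalty.
```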
On the running-cost front, the various cost elements (the ESB, network connections, underlying applications, etc.) need to be metered at a granularity that makes sense for accounting purposes and that lets metered data be tagged with a user. 'User' here is meant in the cost-benefit accounting sense, not an actual business user. This metering is not easy. Most of the time, the underlying IT elements are not amenable to being metered at a granularity that lets you connect the metered data with users, and sometimes metering at such granularity adds enough overhead to breach non-functional requirements. I have not seen a satisfactory solution to this problem. People have used various sampling and data-projection techniques; these are unfair in some scenarios, and costs get skewed in favour of or against some information systems. The application part of this run-time metering is relatively easy, but it still risks adding overheads and breaching non-functional requirements, so people use sampling and projection techniques here too. Luckily, there is not much seamless reuse of application services, so this sampling does not skew costs drastically.
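A rough sketch of the sampling-and-projection technique, assuming a service call we can wrap: sampling one call in a hundred keeps the metering overhead bounded, and the projection multiplies the sampled cost back up to the full volume. The decorator, rate and user name are illustrative:

```python
import random
import time
from collections import defaultdict

SAMPLE_RATE = 0.01            # meter ~1% of calls; keeps overhead low
metered = defaultdict(float)  # projected seconds per accounting 'user'

def metered_call(user):
    """Wrap a service call; time a sample of calls and project to full volume."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < SAMPLE_RATE:
                start = time.perf_counter()
                result = fn(*args, **kwargs)
                # One sampled call stands in for 1/SAMPLE_RATE unsampled calls.
                metered[user] += (time.perf_counter() - start) / SAMPLE_RATE
                return result
            return fn(*args, **kwargs)  # unmetered fast path
        return inner
    return wrap

@metered_call(user="loan-origination")  # 'user' in the cost-accounting sense
def price_quote():
    time.sleep(0.001)                   # stand-in for real service work

for _ in range(2_000):
    price_quote()
print(dict(metered))  # projected cost; skews for low-volume users, as noted above
```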
As for benefits, the debate is even more intense. It begins with the definition of when a project is deemed complete and starts accumulating benefits. For example, in a business process automation project using SOA, with iterative delivery, one may automate some part of a business process, which yields some benefit, but the end-to-end business process may actually suffer during the transition, because it has to carry on with manual workarounds. How does one measure benefits in this case? With intangibles, there is the perennial problem of how to measure intangible benefits at all. Sometimes even measuring tangible benefits is difficult, because the data for comparison is unavailable. And as with costs, measuring benefits needs metering at various levels for the elements of benefit (e.g., time, resource usage, etc.); all the metering issues faced by cost measurement apply to benefit measurement as well.
So the key problem is working out the semantics of costs and benefits with the business folks and putting a programme in place for measuring them in conjunction with SOA. If SOA is combined with such a measurement programme, it may be possible to demonstrate ROI against these agreed definitions. This measurement programme is peculiar in that its deployment can be clubbed with SOA, but it has its own governance needs, separate from SOA's, so it needs to be handled appropriately. It is more than BAM, since it covers IT aspects too. Maybe we need a new acronym: how does BITAM (business and IT activity monitoring) sound?
Saturday, December 02, 2006
Agile methods to kill outsourcing? I don't think so.
I hear an industry thought leader vigorously promoting agile methods. Nothing wrong with that. The only problem is an underlying thought I don't agree with: that agile methods can kill outsourcing. I would like to offer a logical argument against it.
For this we need to understand the philosophies behind getting a job done. The two prominent schools of thought are Adam Smith's and Hammer and Champy's. Adam Smith proposed a division-of-labour-centric approach: to get any job done, the roles were clearly segregated, and each role needed specific skills. Whatever inputs a role needed beyond its own skills were provided by the roles above it. The system was made up of a lot of specialists. Each specialist made only those decisions confined to his role and sought decisions beyond his role from outside, irrespective of the effect on the job at hand. (Any government office anywhere in the world is a good example of this approach, as are some public-sector banks in India.)
It worked fine for many years. But like any system, it developed aberrations. The big bureaucracy that this division of labour created started having bad effects on the working of these systems. That is when Hammer and Champy came up with their revolutionary ideas on business process re-engineering. They proposed systems with generalists instead of specialists. These generalists were to be supported by enabling tools: improved IT systems and better collaboration tools such as fax, telephone and e-mail. They made multiple decisions, irrespective of their specialisation, using the enabling tools, and in the rare cases when they could not make a decision, the job was transferred to an actual specialist. This mode of getting a job done has caught on, and examples are everywhere. (Any private bank, in India or anywhere else in the world, is an example: a bank teller supports all the functions from opening an account to withdrawing cash.)
This approach works because one only rarely needs the services of a true specialist, compared with a generalist supported by enabling tools. One can see a parallel between these philosophies and the way software development itself is evolving. The traditional SDLC with BDUF (big design up front) follows the Smithian approach: it expects a lot of specialists to collaborate to develop software, and it needs very heavy process support. That is where ISO/CMM come in.
Agile methods, on the other hand, appear to follow the Hammer-Champy approach to software development, with a slight variation: they rely on multi-role specialists instead of generalists. These multi-role specialists perform multiple roles themselves, and they are specialists in these multiple roles (through training, experience or both), hence they need neither big process support nor support from a lot of other specialists. The people who think this can kill outsourcing appear to base that conclusion on the following logic: since multi-role specialists are in short supply and difficult to create, outsourcers cannot have enough of them; hence outsourcing will stop. That is what the logic appears to be.
But as I discussed in one of my previous posts about the innovation shown by outsourcers, this too can be handled innovatively. One can always replace the multi-role specialist with a generalist supported by enabling tools and achieve the same result, as originally envisaged by Hammer and Champy. One cannot beat the support systems that large outsourcing organisations provide for such a generalist; they have the benefit of a humongous shared pool of knowledge, which even multi-role specialists do not have access to. So agile methods should not be viewed as an antidote to outsourcing.
As outlined in one of my earlier posts, both these approaches (viz. agile and traditional SDLC) are valid and valuable in different contexts. IT leaders need to choose appropriate methods based on their needs. As an enterprise architect, it is my worry to provide appropriate governance controls in a uniform framework that works for both approaches. It is vitally important that I put these governance controls in place so that roles make only those decisions they are empowered to make, because it is very easy to confuse role boundaries between these two drastically different approaches.
Saturday, December 30, 2006
Requirements elicitation
The ways and means in software design, build and test has progressed in leaps and bounds, in recent years. But when it comes to requirements management the IT community is still stuck in stone age. The formal approaches have been tried and have not worked tremendously well in business IT scenario. In one of my earlier posts (Requirements management is crucial) I have tried to analyze why a formal approach may not work well with business IT systems.
That being said, even in a semi formal ways and means that we use for requirements elicitation and management, we can do much better. For example, in my current project one of my business analyst (BA) was complaining about how users are inept at giving requirements. I have heard that umpteen times till now. What we in IT fail to understand is that, business users and IT are two different cultures trying to speak with each other. Not only cultures are different, even languages are different.
My BA holds a workshop to elicit requirements and use cases are his means. But business users concerned, have never heard of the use case terminologies. They cant specify their requirements in a structured way the use case expects them to. It will help them tremendously if we tell them beforehand what we expect from them, in what format. May be a mock excercise or a working example can do the trick.
Another thing we fail to understand is that there are multiple types of requirements, and use case focus on requirement tend to combine them together. There are strategic and policy level requirements, which would be common across multiple use cases, whereas operational requirements will be specific to use cases. When we mix all these together in a use case, we are more often than not going to get requirement changes, because users empowered to specify these different requirements are different. When operational user speaks on behalf of policy makers, he is making a mistake. And we in IT are encouraging him by pressurising him to specify his requirements faster!
We need to have better mechanisms to handle this than plain use cases. I sincerely wish there be more tools in this space to help my BA and users alike. Till then I have to devise manual means to identify operational requirements and strategic requirements, and manage their life-cycles seperately. It will then be clearer with my BA, why is he getting so many requirement changes.
That being said, even in a semi formal ways and means that we use for requirements elicitation and management, we can do much better. For example, in my current project one of my business analyst (BA) was complaining about how users are inept at giving requirements. I have heard that umpteen times till now. What we in IT fail to understand is that, business users and IT are two different cultures trying to speak with each other. Not only cultures are different, even languages are different.
My BA holds a workshop to elicit requirements and use cases are his means. But business users concerned, have never heard of the use case terminologies. They cant specify their requirements in a structured way the use case expects them to. It will help them tremendously if we tell them beforehand what we expect from them, in what format. May be a mock excercise or a working example can do the trick.
Another thing we fail to understand is that there are multiple types of requirements, and use case focus on requirement tend to combine them together. There are strategic and policy level requirements, which would be common across multiple use cases, whereas operational requirements will be specific to use cases. When we mix all these together in a use case, we are more often than not going to get requirement changes, because users empowered to specify these different requirements are different. When operational user speaks on behalf of policy makers, he is making a mistake. And we in IT are encouraging him by pressurising him to specify his requirements faster!
We need to have better mechanisms to handle this than plain use cases. I sincerely wish there be more tools in this space to help my BA and users alike. Till then I have to devise manual means to identify operational requirements and strategic requirements, and manage their life-cycles seperately. It will then be clearer with my BA, why is he getting so many requirement changes.
Wednesday, December 27, 2006
Enterprise architecture in MDA terms
Recently one of my colleagues quipped about UML being nothing but pretty pictures. But at the same time he wanted to use MDA for EA. He pointed this documents as a good start.
I feel he was wrong about UML being nothing more than pretty pictures. It has a meta-model behind it. Which can be extended and used in ways you want. I myself have extended UML to capture object relational mappings and generated a lot of code. Given misconceptiosn about UML, no wonder there is big resistance to MDA to be used as means, in enterprise IT scenario. But things are changing. Now there are attempts to make MDA means for enterprise architecture, definition and deployment. There are a few challenges in achieving this, though.
I often wanted models for every cell of Zachman framework. For me the downward journey within a column of Zachman f/w is that of model refinement and horizontal journey is that of model traceability. However Zachman f/w is just a framework. To be useful, it need to be fleshed out. The canvas is very big. So those EA within an enterprise, who believe in MDA, should take upon themsalves couple of responsibilities. a) to create models for every cell of zachman f/w and prove model transformations do work, by refining models down the cells. And b) they must create a body of knowledge on deploying and governing these models, across the f/w. How to fit this in normal IT governance f/w and secure funding is a big challenge. For that I propose that we must first use 'model driven development' (MDD for short) to prove value of MDA like approach.
MDA is a big culture shock for most IT organisation, precisely because everyone out there thinks UML is nothing but pretty pictures. Those who believe in MDA need to start small and prove value of MDA approach then only we can go to next level, that is making EA congruent with MDA. Using MDD is a very good way to begin proving value of MDA, unless you find organisations which are sold on MDA to begin with. In short this is a very tough ask and lot of farming/hunting is required to nurture the approach. Being a practioner I would suggest to try this approach out on small scale, in non mission-critical projects.
Another problem we might face is that the models are rigid way of capturing knowledge. They are suitable for more defined aspect of entrprise architecture (ie. all circles of TOGAF framework) but are not suitable for more fluid aspects (like business and IT strategies as required in a few cells of Zachman f/w). So from TOGAF f/w persepctive they are OK but not from Zachman f/w persepctive. To be used with Zachman f/w we may have to augment MDA with something more, some sort of unstructured knowledge repository. But this is long way of and can be ignored for time being.
I find it good that interest in MDA is reinvigorating.
I feel he was wrong about UML being nothing more than pretty pictures. It has a meta-model behind it. Which can be extended and used in ways you want. I myself have extended UML to capture object relational mappings and generated a lot of code. Given misconceptiosn about UML, no wonder there is big resistance to MDA to be used as means, in enterprise IT scenario. But things are changing. Now there are attempts to make MDA means for enterprise architecture, definition and deployment. There are a few challenges in achieving this, though.
I often wanted models for every cell of Zachman framework. For me the downward journey within a column of Zachman f/w is that of model refinement and horizontal journey is that of model traceability. However Zachman f/w is just a framework. To be useful, it need to be fleshed out. The canvas is very big. So those EA within an enterprise, who believe in MDA, should take upon themsalves couple of responsibilities. a) to create models for every cell of zachman f/w and prove model transformations do work, by refining models down the cells. And b) they must create a body of knowledge on deploying and governing these models, across the f/w. How to fit this in normal IT governance f/w and secure funding is a big challenge. For that I propose that we must first use 'model driven development' (MDD for short) to prove value of MDA like approach.
MDA is a big culture shock for most IT organisation, precisely because everyone out there thinks UML is nothing but pretty pictures. Those who believe in MDA need to start small and prove value of MDA approach then only we can go to next level, that is making EA congruent with MDA. Using MDD is a very good way to begin proving value of MDA, unless you find organisations which are sold on MDA to begin with. In short this is a very tough ask and lot of farming/hunting is required to nurture the approach. Being a practioner I would suggest to try this approach out on small scale, in non mission-critical projects.
Another problem we might face is that the models are rigid way of capturing knowledge. They are suitable for more defined aspect of entrprise architecture (ie. all circles of TOGAF framework) but are not suitable for more fluid aspects (like business and IT strategies as required in a few cells of Zachman f/w). So from TOGAF f/w persepctive they are OK but not from Zachman f/w persepctive. To be used with Zachman f/w we may have to augment MDA with something more, some sort of unstructured knowledge repository. But this is long way of and can be ignored for time being.
I find it good that interest in MDA is reinvigorating.
Thursday, December 14, 2006
Services are not ACID
ACID stands for atomicity, consistency, isolation and durability. Any transactional system needs to provide these properties. In file based system, it was the problem of designer and implementor(coder) to arrange for these properties. Then with advent of RDBMS these responsibilities were taken up by RDBMS. Again, in early days of distributed computing, the responsbility (of two phase commit) fell on designers/coders. Then came XA open architecture and introduced transaction co-ordinator in every RDBMS. Life became easier after that (from ACID perspective).
ACID is critical for transactional systems. Services in SOA do not guarantee ACID properties. They require compensatory services to achieve ACID effect. Those of you, who have worked with 'file' based information systems and then moved onto work with RDBMS, will understand my concern about services not being ACID, more. Because in a OLTP world, a transaction followed with some delay by a compensatory transaction, is not exactly equivalent to a ACID transaction. This may not affect in most cases, but when it does, the effect can be pretty nasty. Especially when a service executes an automated protocol in one service call and then try to undo its effects using a compensatory service. This kind of thing happens in a straight thru processing scenario. This remains one of my major concerns.
ACID is critical for transactional systems. Services in SOA do not guarantee ACID properties. They require compensatory services to achieve ACID effect. Those of you, who have worked with 'file' based information systems and then moved onto work with RDBMS, will understand my concern about services not being ACID, more. Because in a OLTP world, a transaction followed with some delay by a compensatory transaction, is not exactly equivalent to a ACID transaction. This may not affect in most cases, but when it does, the effect can be pretty nasty. Especially when a service executes an automated protocol in one service call and then try to undo its effects using a compensatory service. This kind of thing happens in a straight thru processing scenario. This remains one of my major concerns.
SOA, funding and hidden gems
There are some good posts on services and their characteristics. The main thought of these posts, is to bring out characteristics of an enterprise service. There is a comparison between city planning and SOA. A thought encourages different granularity of services to co-exists with an anlogy to niche retailers v/s Walmart.
However the way projects are funded in enterprise IT will hinder this approach. Enterprise IT, as far as funding goes, is governed more like a communist polit-buro than a capitalist entreuprunial system. There are no venture-capitalists ready to fund (seemingly) wild-vackey ideas. They would rather go with proven ways of doing things. So the idea of having niche services co-existing with run-of-the-mill enterprise services is a non-starter. That does not mean it will not happen.
Traditionally departments moon-light on their budgets and create fundings for these niche capabilities (albeit not in services form), but then those capabilities remain outside perview of enterprise architecture and remain in back-alleys, hidden in enterprise hiddenware. Which also prevents proper utilisation of these capabilities. Same thing can happen in an SOA. Departments will create niche services, and somehow fund them. But these services will be below the radar for rest of the enterprise.
An enterprise architect, has to live with this fact of life and provide means to unearth such hidden gems and bring them back to EA fold for governance. As mentioned in the posts mentioned above, a collection of such niche services may be a viable alternative to a coarse grained enterprise service, only if we know such niche services exist.
Solving the funding issue, to borrow a term from Ms. Madeline Albright,is beyond payscales of most enterprise architects and best left to business leaders to figure out!
However the way projects are funded in enterprise IT will hinder this approach. Enterprise IT, as far as funding goes, is governed more like a communist polit-buro than a capitalist entreuprunial system. There are no venture-capitalists ready to fund (seemingly) wild-vackey ideas. They would rather go with proven ways of doing things. So the idea of having niche services co-existing with run-of-the-mill enterprise services is a non-starter. That does not mean it will not happen.
Traditionally departments moon-light on their budgets and create fundings for these niche capabilities (albeit not in services form), but then those capabilities remain outside perview of enterprise architecture and remain in back-alleys, hidden in enterprise hiddenware. Which also prevents proper utilisation of these capabilities. Same thing can happen in an SOA. Departments will create niche services, and somehow fund them. But these services will be below the radar for rest of the enterprise.
An enterprise architect, has to live with this fact of life and provide means to unearth such hidden gems and bring them back to EA fold for governance. As mentioned in the posts mentioned above, a collection of such niche services may be a viable alternative to a coarse grained enterprise service, only if we know such niche services exist.
Solving the funding issue, to borrow a term from Ms. Madeline Albright,is beyond payscales of most enterprise architects and best left to business leaders to figure out!
Monday, December 04, 2006
SOA and demonstrating ROI
Typically large enterprises in BFSI space have a conservative cost benefit accounting practices, focused on accountability. Within these enterprises IT departments are often flogged for not achieving ROI. This also makes it a major area of concern for an Enterprise architect.
In 2007 trend analysis by IT experts, one of the major trends mentioned is about difficulty in demonstrating ROI. Traditionally demonstrating ROI was a problem in IT shops. With stovepipe architecture, at least you could attach your costs to relevant information systems and could figure out benefits accrued because of that information system. Whether it made sense or not, to have particular information system was obvious. Of course problems related to intangible benefits accounting was always there.
The situation became difficult with sharing of infrastructure be it client server systems and lately with EAI. However with SOA gaining traction the issue of demonstrating ROI is becoming even more difficult to handle. There are several issues, ranging from, when a project considered delivered, how costs are measured, apportioned and paid for, to what constitutes benefit and how to measure it. Most of the time IT folks are on defensive, in these debates.
There are problem with measuring both, the fixed costs incurred while building these systems as well as running costs incurred while running these systems. On fixed cost front, the problem arises mainly because of the reuse of infrastructure and non-functional services in a seamless manner. Please note this is different than any sharing of infrastructure you may have seen earlier. Earlier even with sharing, there were clear cut boundaries between users. Now with SOA, the boundaries are blurred, as far as infrastructure and non-functional services goes. One does not know how many future projects are going to use these services (with or without refactoring). Hence they have no clue how to apportion the fixed costs to users. This sometimes turns projects away from using the common infrastructure, as the up front costs are deemed too high. If yours is the first project you rue the fact that you have to build the entire infrastructure and own up all the cost. The enterprise needs to have a clear policy for these types of scenarios so that projects know how cost is going to be charged.
On running costs front various cost elements, e.g. ESB, network connections, underlying applications etc. need to be metered at the granularity which makes sense for the accounting purposes and which helps tagging metered data with a user. Here user is meant from a cost benefit accounting perspective and not actual business user. This metering is not easy. Most of the times underlying IT elements are not amenable to be metered at the level of granularity which helps you connect the metered data with users. Sometimes, metering at such granularity adds an un-acceptable overhead so as to breach non-functional requirements. I have not seen any satisfactory solution to this problem. People have used various sampling and data projection techniques. These are unfair in some scenarios and costs get skewed in favour or against some information systems. The applications part of this run-time metering is relatively easy but it still has problems of adding overheads and breaching non-functional requirements. So people use sampling and projection techniques, here too. But luckily there is not much seamless reuse of application services hence these sampling does not skew costs drastically.
As for benefits the debate is even more intense. The debate begins with the definition of when the project is deemed complete and starts accumulating benefits. e.g. In a business process automation project using SOA, with iterative delivery, one may automate some part of business process which results into some benefit, but end to end business process may actually suffer during this transition, because it may have to carry on with manual work-around. So how does one measure benefits in this case? With intangibles, there is a perennial problem of how does one measure intangible benefits? Sometimes even measuring tangible benefits is difficult, because the data for comparison is unavailable. As with costs, to measure benefits one needs metering at various levels for measuring elements of benefits (e.g. time, resource usage etc.). All the metering issues faced by cost measurement are also faced by benefits measurement as well.
So the key problem is working out semantics of costs and benefits with the business folks, putting a programme in place for their measurement in conjunction with SOA. If SOA is combined with a measurement programme, then it may be possible to demonstrate the ROI with these agreed definitions. This measurement programme is peculiar in the sense that for deployment it can be clubbed with SOA but it has its own separate governance needs, apart from SOA. So it needs to be handled appropriately. This is more than BAM and covers even IT aspects too. May be we need a new acronym, how does BITAM (Business and IT activity monitoring) sound?
In 2007 trend analysis by IT experts, one of the major trends mentioned is about difficulty in demonstrating ROI. Traditionally demonstrating ROI was a problem in IT shops. With stovepipe architecture, at least you could attach your costs to relevant information systems and could figure out benefits accrued because of that information system. Whether it made sense or not, to have particular information system was obvious. Of course problems related to intangible benefits accounting was always there.
The situation became difficult with sharing of infrastructure be it client server systems and lately with EAI. However with SOA gaining traction the issue of demonstrating ROI is becoming even more difficult to handle. There are several issues, ranging from, when a project considered delivered, how costs are measured, apportioned and paid for, to what constitutes benefit and how to measure it. Most of the time IT folks are on defensive, in these debates.
There are problem with measuring both, the fixed costs incurred while building these systems as well as running costs incurred while running these systems. On fixed cost front, the problem arises mainly because of the reuse of infrastructure and non-functional services in a seamless manner. Please note this is different than any sharing of infrastructure you may have seen earlier. Earlier even with sharing, there were clear cut boundaries between users. Now with SOA, the boundaries are blurred, as far as infrastructure and non-functional services goes. One does not know how many future projects are going to use these services (with or without refactoring). Hence they have no clue how to apportion the fixed costs to users. This sometimes turns projects away from using the common infrastructure, as the up front costs are deemed too high. If yours is the first project you rue the fact that you have to build the entire infrastructure and own up all the cost. The enterprise needs to have a clear policy for these types of scenarios so that projects know how cost is going to be charged.
On running costs front various cost elements, e.g. ESB, network connections, underlying applications etc. need to be metered at the granularity which makes sense for the accounting purposes and which helps tagging metered data with a user. Here user is meant from a cost benefit accounting perspective and not actual business user. This metering is not easy. Most of the times underlying IT elements are not amenable to be metered at the level of granularity which helps you connect the metered data with users. Sometimes, metering at such granularity adds an un-acceptable overhead so as to breach non-functional requirements. I have not seen any satisfactory solution to this problem. People have used various sampling and data projection techniques. These are unfair in some scenarios and costs get skewed in favour or against some information systems. The applications part of this run-time metering is relatively easy but it still has problems of adding overheads and breaching non-functional requirements. So people use sampling and projection techniques, here too. But luckily there is not much seamless reuse of application services hence these sampling does not skew costs drastically.
As for benefits the debate is even more intense. The debate begins with the definition of when the project is deemed complete and starts accumulating benefits. e.g. In a business process automation project using SOA, with iterative delivery, one may automate some part of business process which results into some benefit, but end to end business process may actually suffer during this transition, because it may have to carry on with manual work-around. So how does one measure benefits in this case? With intangibles, there is a perennial problem of how does one measure intangible benefits? Sometimes even measuring tangible benefits is difficult, because the data for comparison is unavailable. As with costs, to measure benefits one needs metering at various levels for measuring elements of benefits (e.g. time, resource usage etc.). All the metering issues faced by cost measurement are also faced by benefits measurement as well.
So the key problem is working out the semantics of costs and benefits with the business folks, and putting a programme in place for their measurement in conjunction with SOA. If SOA is combined with such a measurement programme, it may be possible to demonstrate ROI against these agreed definitions. This measurement programme is peculiar in the sense that its deployment can be clubbed with SOA, but it has its own governance needs, separate from SOA's, so it needs to be handled appropriately. It is more than BAM and covers the IT aspects too. Maybe we need a new acronym; how does BITAM (Business and IT Activity Monitoring) sound?
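To show what the agreed definitions buy you, here is a minimal sketch that computes ROI from an amortized fixed-cost share, a projected running cost, and a benefit figure signed off by the business. The figures and names are illustrative only, not a proposed standard:

// A minimal sketch of the 'agreed definitions' idea: ROI is only computable once
// business and IT have agreed on what counts as cost and benefit, and over which
// window. Field names and figures are illustrative.
public class AgreedRoi {

    public static double roi(double agreedBenefits, double fixedCostShare, double meteredRunningCost) {
        double totalCost = fixedCostShare + meteredRunningCost;
        return (agreedBenefits - totalCost) / totalCost; // classic ROI ratio
    }

    public static void main(String[] args) {
        // e.g. amortized infra share + projected running cost vs. benefits agreed with business
        System.out.printf("ROI: %.1f%%%n", 100 * roi(450_000, 100_000, 150_000)); // 80.0%
    }
}

The arithmetic is trivial; the hard part is that every input to it comes out of the measurement programme and the negotiated semantics above.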
Saturday, December 02, 2006
Agile methods to kill outsourcing? I don't think so.
I hear an industry thought leader vigorously promoting Agile Methods. Nothing wrong with that. The only problem is an underlying thought I don't agree with: that Agile Methods can kill outsourcing. I would like to provide a logical argument against this thought.
For this we need to understand the philosophies behind getting a job done. The two prominent schools of thought are Adam Smith's and Hammer and Champy's. Adam Smith proposed a division-of-labour-centric approach. To get any job done, roles were clearly segregated, and each role needed specific skills. The inputs a role needed beyond its own skill were provided by the roles above it. The system was made up of a lot of specialists. Each specialist made only those decisions confined to his role and expected decisions beyond it to be made outside, irrespective of the effect on the job at hand. (Any government office anywhere in the world is a good example of this approach, as are some public sector banks in India.)
It worked fine for a lot of years. But like any system, it developed aberrations. The big bureaucracy that this division of labour created started having bad effects on the working of systems. That is when Hammer and Champy came up with their revolutionary ideas on business process re-engineering. They proposed systems with generalists instead of specialists. These generalists were to be supported by enabling tools, such as improved IT systems and better collaboration tools like fax, telephone and e-mail. They made multiple decisions, irrespective of the specialization they had. They used the enabling tools to make those decisions, and in the rare cases when they could not, the job was transferred to an actual specialist. This mode of getting a job done has caught on, and examples are everywhere. (Any private bank, in India or anywhere in the world, is an example, where a bank teller supports all the functions from opening an account to withdrawing cash.)
This approach works because one only rarely needs the services of a true specialist, as compared to a generalist supported by enabling tools. One can see the parallel between these schools and the way software development itself is evolving. Traditional SDLC with BDUF (big design up front) follows a Smithian approach: it expects a lot of specialists to collaborate to develop software, and it needs very heavy process support. That is where ISO/CMM comes in.
Agile methods, on the other hand, appear to follow a Hammer-Champy approach to software development, with a slight variation. They rely on multi-role specialists instead of generalists. These multi-role specialists perform multiple roles themselves. They are specialists in each of these roles (by training, experience or both), hence they need neither big process support nor the support of a lot of other specialists. The people who think this can kill outsourcing appear to base that conclusion on the following logic: since multi-role specialists are in short supply and difficult to create, the outsourcers cannot have enough of them; hence outsourcing will stop. That is what the logic appears to be.
But as I discussed in one of my previous posts about the innovation shown by outsourcers, this too can be handled innovatively. One can always replace the multi-role specialist with a generalist supported by enabling tools and achieve the same result, as originally envisaged by Hammer and Champy. One cannot beat the support systems that large outsourcing organisations provide for such a generalist. These organisations have the benefit of sharing a humongous amount of knowledge, which even multi-role specialists do not have access to. So agile methods should not be viewed as an antidote to outsourcing.
As outlined in one of my earlier posts, both these approaches (viz. agile and traditional SDLC) are valid and valuable in different contexts. IT leaders need to choose the appropriate method based on their needs. As an Enterprise Architect, it is my worry to provide appropriate governance controls in a uniform framework that works for both approaches. It is vitally important that I put these controls in place so that roles make only those decisions they are empowered to make, because it is very easy to confuse role boundaries between these two drastically different approaches. A toy sketch of what such a control could look like follows.
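Here is a minimal Java sketch of a decision-rights matrix: one mechanism serves both methods, and only the entries differ. The roles and decision names are made up for illustration; a real governance framework would be far richer:

import java.util.Map;
import java.util.Set;

// A minimal sketch of a decision-rights check: the same matrix mechanism covers
// both traditional SDLC and agile; only the empowerments differ. All names are
// hypothetical.
public class DecisionRights {

    private final Map<String, Set<String>> allowed;

    public DecisionRights(Map<String, Set<String>> allowed) {
        this.allowed = allowed;
    }

    public boolean mayDecide(String role, String decision) {
        return allowed.getOrDefault(role, Set.of()).contains(decision);
    }

    public static void main(String[] args) {
        // Under traditional SDLC, design decisions sit with the architect;
        // under agile, the matrix empowers the team for design, but not for scope.
        DecisionRights sdlc = new DecisionRights(Map.of(
                "architect", Set.of("design", "technology-choice"),
                "developer", Set.of("implementation")));
        DecisionRights agile = new DecisionRights(Map.of(
                "team", Set.of("design", "implementation"),
                "product-owner", Set.of("scope")));

        System.out.println(sdlc.mayDecide("developer", "design"));  // false
        System.out.println(agile.mayDecide("team", "design"));      // true
    }
}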