Thursday, November 30, 2006
No stereotyping please!
A long time ago I was a starry-eyed (a bit of an exaggeration) entrant into the world of IT, just as the IT revolution in India was about to begin. I was part of an elite 'tools group', using translator technologies to build home-grown tools for the various projects that came our organisation's way. Amidst all those small projects, a big depository from the western world developed enough faith in us to ask us to develop its complete software solution. The visionaries in my organisation did not go about it in the normal run-of-the-mill way. They decided to build home-grown code generators to ensure consistent quality, and created a factory model of development. I was one of the juniormost members of the team that built and maintained those tools.
Then, while working on another project for a large British telecom company (oops! could not hide the name), another visionary from my organisation put this factory model into practice in a geographically separated way and delivered tremendous cost savings. That was the first truly offshored project done by my organisation. The tools we had developed helped a lot in sending the requirements offshore in model form and getting code back to be tested onsite. We provided consistent quality and on-time delivery. Needless to say, it was a huge success and more business came our way. Mind you, this was well before Y2K made Indian outsourcers a big hit.
During my days in the tools group I had the good fortune to attend a seminar by Prof. K. V. Nori. His speciality is translator technologies and he taught at CMU. He exhorted us to 'Generate the generator!' Coming from a compiler-building background, it was natural for him to say it, but for me it was like an 11th commandment. It captivated me. We did try to generate the generator. During my MasterCraft days, I convinced two of my senior colleagues and together we designed a language called 'specL'. 'specL' has now become the basis of our efforts on the 'MOF Model to Text' standard under the OMG's initiative. This is testimony to the fact that we are not just cheap labour suppliers. We are good enough to be thought leaders within global IT.
It was not cheap labour alone that helped us succeed in the outsourcing business. It was also innovation, grit and determination. That's why it pains me when somebody stereotypes Indian outsourcers as 'sub-optimal', or India as a 'sub-optimal' location. Firstly, I don't like stereotyping, and secondly it's a wrong stereotype. One can have a position opposing outsourcing, offshoring, what have you. There are enough arguments against outsourcing, but please don't denigrate a group as sub-optimal.
And if I am going to be stereotyped anyway, then please include me in the group of "all men who are six feet tall, handsome, left-handed, fathers of cute four-year-olds". Then I may not feel as bad being called sub-optimal. (Well, handsome and left-handed are aspirational adjectives, distant from reality.)
Monday, November 27, 2006
SOA in enterprises or Hype 2.0
If dot-com in the enterprise was Hype 1.0, then SOA in the enterprise is coming very close to becoming Hype 2.0. The way SOA has been touted as the best thing to happen to mankind since sliced bread brings it ever closer to that dubious distinction. Vendors promise everything from flexibility, adaptability and reuse to lower costs, if only you use their merchandise to do SOA. SOA is good as long as decision makers can separate hype from reality. I for one will be very saddened if SOA goes the same way as the dot-com hype. The following discussion tries to separate hype from reality, so that decision makers have correct expectations and can move along the path of sustainable SOA.
1. Myth of reusable services
In my experience as an architect I have never seen as-is reuse of a business service implementation. Some amount of refactoring is always needed before it can be reused. The refactored business service actually harbours multiple services under a common facade. For a service to be as-is reusable it needs to be so fine-grained that it runs into problems with the non-functional attributes of services. To give an example: if I had a business service providing customer details along with holding details, given a customer identity, then I have a couple of options for its implementation.
I) I can build it as a composite service composed of more granular services for customer detail and holding detail.
II) I can build a monolithic service providing both customer and holding details.
Now remember the lesson we learnt in managing data: always do the join at the source of the data, because at the source you know more about the actual data and can do many more optimisations than you can away from it. (Remember the old adage: don't do the join in memory, let the RDBMS handle it?) So from a non-functional perspective (scalability and performance), option II is very attractive and sometimes mandatory.
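To make the contrast concrete, here is a minimal sketch of the two options. All the type and method names are hypothetical, invented purely for illustration; this is not code from any real system.

    // Hypothetical contracts -- the names are illustrative only.
    class CustomerDetails { /* ... customer attributes ... */ }
    class HoldingDetails  { /* ... holding attributes ... */ }

    class CustomerWithHoldings {
        final CustomerDetails customer;
        final HoldingDetails holdings;
        CustomerWithHoldings(CustomerDetails c, HoldingDetails h) {
            customer = c;
            holdings = h;
        }
    }

    interface CustomerService { CustomerDetails customer(String customerId); }
    interface HoldingService  { HoldingDetails  holdings(String customerId); }

    // Option I: a composite service assembled from the two granular ones.
    // Reusable pieces, but two calls and an in-memory join per request.
    class CompositeCustomerHoldingService {
        private final CustomerService customers;
        private final HoldingService holdings;
        CompositeCustomerHoldingService(CustomerService c, HoldingService h) {
            customers = c;
            holdings = h;
        }
        CustomerWithHoldings fetch(String customerId) {
            return new CustomerWithHoldings(customers.customer(customerId),
                                            holdings.holdings(customerId));
        }
    }

    // Option II: one monolithic contract; the implementation does the join at
    // the data source, e.g. with a single SQL statement along the lines of
    //   SELECT ... FROM customer c JOIN holding h ON h.cust_id = c.id WHERE c.id = ?
    interface CustomerHoldingService {
        CustomerWithHoldings fetch(String customerId);
    }

Option I keeps the pieces reusable; option II keeps the join where the database optimiser can see it.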
No doubt, option I gives me a more reusable service. But it still does not give me an absolutely reusable service implementation. For example, suppose I need the customer details with holding details in three different kinds of execution scenario, viz.
a) an on-line application for customer service,
b) a batch application to create a mass mailer, and
c) a business intelligence application to understand customer behaviour (with holdings as one of the parameters).
Even though I have more granular services, not all of them are usable in all of these execution contexts. I cannot simply call the granular services in a loop to get the bulk data needed for scenarios b) and c) above. So reusability is restricted by execution context. Of course you can throw hardware at the problem, but then your costs escalate and any savings you made by reusing software are more than offset by hardware costs. Just because you organise your software in terms of services (which essentially specify the contract between user and supplier, and nothing more), you are not going to get reusability. It enables reuse within an execution context, but not universal reuse. So if we treat services as explicit contract specifications between users and suppliers, then we should attempt to reuse those contracts. That, however, does not automatically translate into implementation reuse.
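To see how the execution context bites, here is a sketch (reusing the hypothetical types from the previous one) of what happens when the on-line contract is pressed into batch service for scenario b):

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Iterator;
    import java.util.List;

    class BatchExample {
        // Scenario b) done naively: one service call -- and therefore one
        // source-side join and one network hop -- per customer. This is the
        // classic N+1 pattern and will not scale to a mass-mailer run.
        static List<CustomerWithHoldings> forMassMailer(Collection<String> customerIds,
                                                        CustomerHoldingService service) {
            List<CustomerWithHoldings> result = new ArrayList<CustomerWithHoldings>();
            for (String id : customerIds) {
                result.add(service.fetch(id)); // fine on-line, ruinous in batch
            }
            return result;
        }
    }

    // What batch and BI contexts really want is a different, set-oriented
    // contract, with the join still done at the source:
    interface BulkCustomerHoldingService {
        Iterator<CustomerWithHoldings> fetchAll(Collection<String> customerIds);
    }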
2. Myth of composite applications
This myth is related to the one above. In most other engineering disciplines, real-world components are standardized, and higher-level solutions are typically component-assembly problems. Not so in software. Even if we have services, their assembly does not necessarily behave within accepted parameters, even though each single service might behave well on its own. So composing implementations to arrive at a solution is not so straightforward. Many vendors will have you believe that if you use their software, most of your software development will reduce to the assembly of services. This is not true, for the following reasons. The correct granularity and definition of services is known to the user organisation, not the vendor. Those service definitions are dictated by the user organisation's business practices and policies. Each organisation is different, so a vendor can never supply you those service definitions. If a vendor does not know what the services look like and what their properties should be, how on earth is he going to guarantee that a composition of such services will behave in the desired manner? And, as outlined in the point above, implementation reuse is a big problem, so even on that front vendors cannot help you. The composite application will therefore remain a myth for some time to come. Vendor sales and marketing machinery will show you Mickey Mouse applications built in a composite-apps scenario. But demand to see at least two productionized composite apps where the majority of the constituent services are shared between the two. My guarantee is, you won't find any.
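A tiny worked example, with invented numbers, of why assemblies drift outside accepted parameters:

    class CompositionBudget {
        // Each service meets its own 100 ms budget, yet composing the two
        // sequentially blows a 150 ms end-to-end budget: latencies add, and
        // availabilities multiply (0.999 * 0.999 < 0.999). Composition does
        // not preserve the parameters each part was certified for.
        static long composedLatencyMs(long customerMs, long holdingMs) {
            return customerMs + holdingMs; // 100 + 100 = 200 > 150
        }
    }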
So is SOA a BIG hype about nothing? Not exactly. It does provide the following benefits.
1. Manageability of software with business alignment
The single most important contribution of SOA is that it connects software with the business. In an SOA approach, one can make sure that all software is aligned with business needs, because every piece of software is traceable to a business need. The whole edifice of building, maintaining and measuring the utility of software revolves around business services. So it becomes easier to maintain focus on the business benefits (or lack thereof) of software. With the traceability it provides, software becomes a manageable entity instead of an unwieldy and randomly interconnected monolith. And some reuse is possible in terms of non-functional services (such as security, authentication, personalisation, etc.).
2. Ability to separate concepts from implementation
The next important contribution of the SOA approach is the focus it brings on separating interface from implementation. The logical extension of this is to separate conceptual elements from platform elements. So if you are using an SOA approach to software development, you have the necessary inputs to create a large-scale conceptual model of your business. You just need to filter out the platform-specific parts from the interfaces you defined. You can further distill these interface specifications to separate the data and behaviour aspects. These are the really reusable bits within your business. It is then straightforward to work out how these reusable bits can be implemented on different implementation platforms. This gives you the necessary jump start for your journey towards a conceptual IT world.
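As a sketch of what that separation might look like in code (again with invented names, reusing the hypothetical types from the earlier sketches):

    import java.util.Map;

    // Conceptual layer: business vocabulary only -- no transport, no SQL, no XML.
    interface CustomerLookup {
        CustomerWithHoldings byId(String customerId);
    }

    // Platform layer: a thin adapter per binding. Swapping HTTP for a message
    // queue (or CICS for J2EE) touches only the adapter, never the contract.
    class HttpBinding {
        private final CustomerLookup lookup;
        HttpBinding(CustomerLookup lookup) { this.lookup = lookup; }
        CustomerWithHoldings handle(Map<String, String> requestParams) {
            return lookup.byId(requestParams.get("customerId"));
        }
    }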
So in my opinion SOA is good, and it is the way to go, but not for the reasons stated by vendors. It is not going to make software drastically cheaper, nor software development drastically faster. It is just a small step in a long journey towards making enterprise software an entity managed by business folks rather than IT folks.
Tuesday, November 07, 2006
Agile, Iterative or Waterfall?
There has been a lot of interest in, and many misconceptions about, the various life cycle methods for solution development. Please note carefully that I am saying solution development and not software development. Enterprises develop solutions to problems. The software content of the solution is developed by the IT sub-organisation; the rest of it is assigned to different sub-organisations within the enterprise. So when we discuss software development life cycle methods (I'll use the short form SDLC henceforth), we must keep solution development life cycle methods (I'll use SolDev as the short form) in mind as well. A software development and deployment method has to synchronize with the solution development and deployment method.
There are various SDLC methods in vogue. The waterfall method has been in use for ages and has its supporters and detractors. Iterative methods originated some time back and are in use in many enterprises. The agile method is the newest kid on the block and has yet to make serious inroads into the enterprise IT scene.
Waterfall is a sequential method: each phase waits for the previous one to finish completely and expects it to hand over a signed and sealed deliverable, which is elaborated in the next phase until the software gets delivered. It assumes that the requirements are well understood and won't change during development. It is the riskiest of the development approaches and has quite a large failure rate.
The iterative method is, as its name suggests, iterative. It creates an initial, fully functional version of the system and iteratively adds functionality to make it complete. During each iteration it also takes into account users' feedback on the previously delivered functionality and corrects the implementation.
The agile method is a more aggressive version of the iterative method, in which timelines are shorter and sacrosanct. It also believes in face-to-face communication rather than written documentation.
Each of them has its own strengths and weaknesses, and choosing one over another is not a trivial decision.
A solution development method is normally iterative, sometimes waterfall, but rarely agile. Solution development and deployment normally involve dealing with real-life things, which are not as soft as software. That may explain why agile methods are not used much there.
Typically, quick-fix and operational solutions rarely involve a big solution design and deployment effort; the major effort goes into software development and deployment, hence agile methods can serve as the SolDev method. Tactical and strategic solutions, on the other hand, involve a significant solution design and deployment effort, so an iterative method appears the right choice for SolDev. Modern enterprises rarely use the waterfall method, as it is too fraught with risk. Again, I am referring to the intent of the solution, and not the systems, when I say operational, tactical or strategic.
For example,
If you were to repair a leaking window in your house, you would call a tradesman, interact with him and get the job done in a day or two. You would give constant feedback and get it done the way you want. This is a quick-fix solution, and an agile method can be (so to speak) the SDLC method.
Whereas if you were to add a conservatory to your house, you might have to interact with lots of tradesmen (or outsource to a contractor), you would have to worry about changing the furniture arrangement in your house, and you might have to change the nature of the games at your kid's birthday party. That's a tactical solution and can hardly be agile. You may iterate over the development of this solution: first build the conservatory, then add the new furniture and relocate the existing furniture. You also have to think about new games for the birthday party that take advantage of the conservatory and the furniture arrangement. Here the actual building of the conservatory is like building software, and the other things you do are part of solution development and deployment. Both need to follow the same life cycle method, otherwise you'll have problems. An agile method for both SDLC and SolDev won't work, because you would not have the bandwidth to support software development (i.e. building the conservatory) as well as solution development (i.e. doing the other stuff, such as buying new furniture and relocating the old). And the SDLC alone can't be agile, because the rest of the solution will not be ready anyway.
The same goes for building a new house altogether. That's a strategic solution, and you would still want an iterative approach: build the basic structure of the house, add the utilities, then the interiors and finally the finishing, constantly giving feedback and checking for yourself how the house is coming along.
Were you to do it in the waterfall model, you would call in a contractor, tell him what you want, and hope he delivers before your required date. Well, if it is something as standardised as house building and the contractor is reliable, you may consider this option.
So it is quite clear that different life cycle methods suit different kinds of SolDev and SDLC. Each has its strengths, but needs to be deployed in the right kind of scenario. An enterprise architect needs to define the decision framework for making this choice within an enterprise.
To reuse or not to reuse?
As soon as I posted about reuse, an old colleague of mine wanted to reuse a small piece of code I had developed quite a while back.
It was nothing great. When we were attempting to make a client-server connection to CICS in the good old days, we were hitting the limits of the CICS COMMAREA. We thought that if we compressed our message, we would not hit the limit. Since it was an over-the-wire message, all content had to be non-binary. One place we could save space was by packing integers into a higher-base representation, because those messages carried a lot of integers. A base-64 representation of an integer takes 6 characters as against 9 in a base-10 representation, and is still capable of going over the wire. This piece of code needed to be as fast as possible, so we developed a function that packed integers into a base-64 representation, and we used a trick to make it faster by taking advantage of the EBCDIC representation. It is part of the library we supply along with our MDA toolset.
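For the curious, here is a portable reconstruction of what such a packer might look like. The names are mine, and the post does not say exactly what the EBCDIC trick was (mapping 6-bit values straight onto contiguous code points by arithmetic would be one such trick); an explicit alphabet table, as below, is the slower but portable alternative:

    class IntPacker {
        // Explicit 64-character alphabet: portable across character sets,
        // unlike arithmetic on code points, which silently changes meaning
        // outside EBCDIC.
        private static final char[] ALPHABET =
            ("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
             + "abcdefghijklmnopqrstuvwxyz+-").toCharArray();

        // Packs a non-negative int into a fixed-width base-64 string.
        // Width 6 covers the full positive int range (64^6 > 2^31).
        static String pack(int value, int width) {
            char[] out = new char[width];
            for (int i = width - 1; i >= 0; i--) {
                out[i] = ALPHABET[value & 0x3F]; // take the low 6 bits
                value >>>= 6;
            }
            return new String(out);
        }

        static int unpack(String packed) {
            int value = 0;
            for (int i = 0; i < packed.length(); i++) {
                value = (value << 6) | indexOf(packed.charAt(i));
            }
            return value;
        }

        private static int indexOf(char c) {
            for (int i = 0; i < ALPHABET.length; i++) {
                if (ALPHABET[i] == c) return i;
            }
            throw new IllegalArgumentException("not a base-64 digit: " + c);
        }
    }

A nine-digit decimal such as 999999999 packs into six characters this way, which is the saving described above.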
My colleague wanted to reuse the code, albeit in a different scenario, as it was well tested and has been in production for many years. Needless to say, he would have fallen flat on his face had he put blind faith in the code. It would have failed, because it relied on the EBCDIC representation and he was trying to deploy it in a non-EBCDIC setting.
Why am I narrating this story? It just re-emphasises my point about implementation reuse. Even with the best of intentions and support, implementation reuse is not as easy as it looks. My colleague was lucky to have me around. Who would think that 50 lines of code could go so horribly wrong in a different execution context? If we had separated the concept from the implementation in this case, and generated an implementation for my colleague's execution context, it might have worked. Without that, he has to refactor the code, which may wipe out the gains he hoped to get by reusing it. I am not sure how I could have made that piece of code more reusable than it is without breaking the non-functional constraints imposed on it by its own execution context.
With refactoring we could have a piece of code that is more reusable than it was, but my colleague would have to spend the effort to make it so. It depends on whether that kind of investment is available to him. And it still won't guarantee that it won't fail somebody else's requirement in a totally different execution context. All this makes me more convinced: either have concept reuse, or be prepared to refactor while reusing. And don't expect 100% reuse.
Subscribe to:
Posts (Atom)
Thursday, November 30, 2006
No stereotyping please!
Long time ago I was a starry eyed (bit of exaggeration here) entrant into world of IT, when the IT revolution in India was about to begin. I was part of elite 'tools group', using translator technologies to build home grown tools for various projects that used to come our organisation's way. Amidst all those small projects a big depository from western world developed enough faith in us. It asked us to develop their complete software solution. The visionaries from my organisation did not do it in normal run-of-the-mill way. They decided to build home grown code generators, to insure consistent quality and created a factory model of development. I was one of the juniormost member of the team which built and maintained those tools.
Then while working for another project for large british telecom company (oops! could not hide the name), another visionary from my organisation did put this factory model in practice, in a geographically seperate way and delivered tremendous cost savings. That was the first truely offshored project done by my organisation. The tools we had developed helped a lot, in sending the requirements offshore - in model form and getting code back, to be tested onsite. We provided consistent quality and on time delivery. Needless to say it was a huge success and more business came our way. Mind you, it was much before Y2K made Indian outsourcers a big hit.
During my days in tools group I had good fortune to attend a seminar by Prof. K. V. Nori. His speciality is Translator Technologies and he taught at CMU. He exahaulted us, to 'Generate the generator!' Coming from compiler building background, it was natural for him to say 'Generate the generator!' But for me it was like 11th commandment. It captivated me. We did try to generate the generator. During my MasterCraft days, I convinced two of my senior colleagues and together we designed a language called 'specL'. 'specL' now has become the basis of our efforts on 'MOF Model to Text standard' under OMG's initiative. This is a testimony to the fact that we are not just cheap labour suppliers. We are good enough to be thought leaders within global IT.
It was not all cheap labour that helped us succeed in outsourcing business. It was also innovation, grit and determination. Thats why it pains me when somebody stereotypes Indian outsourcers as 'sub-optimal' or India as 'sub-optimal' location. Firstly, I dont like stereotyping and secondly its a wrong stereotype. One can have a position opposing outsourcing, offshoring, what have you. There are enough arguments against outsourcing, but please dont denigrate a group as sub-optimal.
And if I am going to be stereotyped anyway, then please include me in a group of "all men who are six feet tall, handsome, left handed, father of cute four year old". Then I may not feel as bad, being called sub-optimal. (Well, handsome and left handed are aspirational adjectives distant from reality).
Then while working for another project for large british telecom company (oops! could not hide the name), another visionary from my organisation did put this factory model in practice, in a geographically seperate way and delivered tremendous cost savings. That was the first truely offshored project done by my organisation. The tools we had developed helped a lot, in sending the requirements offshore - in model form and getting code back, to be tested onsite. We provided consistent quality and on time delivery. Needless to say it was a huge success and more business came our way. Mind you, it was much before Y2K made Indian outsourcers a big hit.
During my days in tools group I had good fortune to attend a seminar by Prof. K. V. Nori. His speciality is Translator Technologies and he taught at CMU. He exahaulted us, to 'Generate the generator!' Coming from compiler building background, it was natural for him to say 'Generate the generator!' But for me it was like 11th commandment. It captivated me. We did try to generate the generator. During my MasterCraft days, I convinced two of my senior colleagues and together we designed a language called 'specL'. 'specL' now has become the basis of our efforts on 'MOF Model to Text standard' under OMG's initiative. This is a testimony to the fact that we are not just cheap labour suppliers. We are good enough to be thought leaders within global IT.
It was not all cheap labour that helped us succeed in outsourcing business. It was also innovation, grit and determination. Thats why it pains me when somebody stereotypes Indian outsourcers as 'sub-optimal' or India as 'sub-optimal' location. Firstly, I dont like stereotyping and secondly its a wrong stereotype. One can have a position opposing outsourcing, offshoring, what have you. There are enough arguments against outsourcing, but please dont denigrate a group as sub-optimal.
And if I am going to be stereotyped anyway, then please include me in a group of "all men who are six feet tall, handsome, left handed, father of cute four year old". Then I may not feel as bad, being called sub-optimal. (Well, handsome and left handed are aspirational adjectives distant from reality).
Monday, November 27, 2006
SOA in enterprises or Hype 2.0
If dot com in enterprises was hype 1.0 then surely SOA in enterprises is coming very close to becoming hype 2.0 . The way SOA has been touted as next best thing to happen to mankind since sliced bread brings it closer to that dubious distinction. The vendors are promising all kinds of things from flexibility, adaptability, re-use to lower costs if you use their merchandise to do SOA. SOA is good as long as decision makers can seperate hype from reality. I for one will be very saddened if SOA goes the some way as dot com hype. Following discussion is to seperate hype from reality so that decision makers have correct expectation, to enable them to move along the path of sustainable SOA.
1. Myth of reusable services
In my experience as architect I have never seen as-is reuse of a business service implementation. Some amount of refactoring is needed for it to be reused. The refactored business service actually harbours multiple services under a common facade. For a service to be as-is reusable it needs to be so fine grained that it will have problems related to non-functional attributes of services. Just to give an example, if I had a business service providing customer details along with his holding details given a customer identity, then I have couple of options in its implementation.
I) I can build it as a composite service composed of more granualar services for customer detail and holding detail.
II) I can build a monolithic service for providing both customer and holding details
Now remember the lesson we learnt in managing the data. Always do the join at the source of data, because at the source you know more about actual data and can do many more optimisations compared to away from source. (Remember the old adage don't do join in memory let RDBMS handle it?). So from a non-functional perspective (scalability and performance), option II) is very attractive and some times mandatory.
No doubt, option I) gives me more re-usable service. But it still does not give me absolutely reusable service impementation. For example if I need the customer details with holding details for three different kinds of execution scenario, viz.
a) an on-line application for customer service,
b) a batch application to create mass mailer and
c) a business intelligence application to understand customer behaviour (with holding as one of the parameters).
Even though I have more granular services, all of them are not usable in all these different execution context. I cannot simply call the granular services in a loop to get the bulk data needed for scenario b) and c) above. So the re-usability is restricted by execution context.Of-course you can throw hardware at this problem, to solve it. But then your costs escalate and any savings you made by reusing software will be more than offset by hardware costs. So just because you organise your software in terms of services (which essentially specifies the contract between user and supplier and nothing more), you are not going to get re-usability. It will enable re-usability within an execution context but not universal re-use. So if we treat Services as explicit contract specification between users and suppliers then we should attempt to reuse these contracts. This however does not automatically translate to implementation reuse.
2. Myth of composite applications
This myth is related to the myth above. In most other engineering disciplines, the real world components are standardized and higher level solutions are typically component assembly problems. Not so in software. Even if we have services, their assembly does not necssarily behave within accepted parameters, even though a single service might behave OK. So composing implementations, to arrive at a solution is not so straight forward. Many vendors will have you believe that if you use their software, most of your software development will reduce to assembly of services. This is not true for following reasons. What is the correct granularity and definition of services is known to user orgnisation than vendor. These service defintions are dictated by user organisation business practices and policies. Each organisation is different, so a vendor can never supply you those service definitions. If a vendor does not know how the services look like and what their properties should be, how on earth is he going to guarantee that composition of such services will behave in desired manner? And as outlined in point above, the implementation reuse is a big problem. So even on that front vendors can not help you. So the composite application will remain a myth for some time now. The vendor sales and marketing machinery will show you mickey mouse applications built using composite apps scenario. But demand to see atleast two productionized composite apps, where majority of constituent services of apps are shared between those two. My guarantee is, you wont find any.
So is SOA a BIG hype about nothing. Not exactly. It does provide following benefits.
1. Manageability of software with business alignment
The single most important contribution of SOA is that it connects software with business. In an SOA approach, one can make sure that all software is aligned with business needs, because all software is traceable to their business needs. The whole edifice of building, maintaining and measuring utility of software will revolve around business services in an SOA approach. So it becomes easier to maintain focus on business benefits (or lack thereof) of software. With the traceability it provides, software becomes a manageable entity from being an unwieldy and randomly interconnected monolith. And there is some reuse possible in terms of non-functional services (such as security, authentication, personlisation etc.).
2. Ability to seperate concepts from implementation
The next important contribution of SOA approach is the focus it brings on seperating interface from implementation. The logical extension of this approach is to seperate conceptual elements from platform elements. So if you are using SOA approach towards software development, you have necessary inputs to create a large scale conceptul model of your business. You just need to filter out platform specific stuff from the interfaces you defined. You can further distill these interface specifications to seperate data and behaviour aspects. These are really reusable bits within your business. It is very easy to figure out how exactly these reusable bits can be implemented on different implementation platforms. This will give you necessary jump start for your journey towards a conceptual IT world.
So in my opinion SOA is good and it is the way to go. But not for the reasons stated by vendors. It is not going to make software drastically cheaper nor going to make software development drastically faster. Its just a small step in a long journey towards making enterprise software an entity managed by business folks rather than IT folks.
1. Myth of reusable services
In my experience as architect I have never seen as-is reuse of a business service implementation. Some amount of refactoring is needed for it to be reused. The refactored business service actually harbours multiple services under a common facade. For a service to be as-is reusable it needs to be so fine grained that it will have problems related to non-functional attributes of services. Just to give an example, if I had a business service providing customer details along with his holding details given a customer identity, then I have couple of options in its implementation.
I) I can build it as a composite service composed of more granualar services for customer detail and holding detail.
II) I can build a monolithic service for providing both customer and holding details
Now remember the lesson we learnt in managing the data. Always do the join at the source of data, because at the source you know more about actual data and can do many more optimisations compared to away from source. (Remember the old adage don't do join in memory let RDBMS handle it?). So from a non-functional perspective (scalability and performance), option II) is very attractive and some times mandatory.
No doubt, option I) gives me more re-usable service. But it still does not give me absolutely reusable service impementation. For example if I need the customer details with holding details for three different kinds of execution scenario, viz.
a) an on-line application for customer service,
b) a batch application to create mass mailer and
c) a business intelligence application to understand customer behaviour (with holding as one of the parameters).
Even though I have more granular services, all of them are not usable in all these different execution context. I cannot simply call the granular services in a loop to get the bulk data needed for scenario b) and c) above. So the re-usability is restricted by execution context.Of-course you can throw hardware at this problem, to solve it. But then your costs escalate and any savings you made by reusing software will be more than offset by hardware costs. So just because you organise your software in terms of services (which essentially specifies the contract between user and supplier and nothing more), you are not going to get re-usability. It will enable re-usability within an execution context but not universal re-use. So if we treat Services as explicit contract specification between users and suppliers then we should attempt to reuse these contracts. This however does not automatically translate to implementation reuse.
2. Myth of composite applications
This myth is related to the myth above. In most other engineering disciplines, the real world components are standardized and higher level solutions are typically component assembly problems. Not so in software. Even if we have services, their assembly does not necssarily behave within accepted parameters, even though a single service might behave OK. So composing implementations, to arrive at a solution is not so straight forward. Many vendors will have you believe that if you use their software, most of your software development will reduce to assembly of services. This is not true for following reasons. What is the correct granularity and definition of services is known to user orgnisation than vendor. These service defintions are dictated by user organisation business practices and policies. Each organisation is different, so a vendor can never supply you those service definitions. If a vendor does not know how the services look like and what their properties should be, how on earth is he going to guarantee that composition of such services will behave in desired manner? And as outlined in point above, the implementation reuse is a big problem. So even on that front vendors can not help you. So the composite application will remain a myth for some time now. The vendor sales and marketing machinery will show you mickey mouse applications built using composite apps scenario. But demand to see atleast two productionized composite apps, where majority of constituent services of apps are shared between those two. My guarantee is, you wont find any.
So is SOA a BIG hype about nothing. Not exactly. It does provide following benefits.
1. Manageability of software with business alignment
The single most important contribution of SOA is that it connects software with business. In an SOA approach, one can make sure that all software is aligned with business needs, because all software is traceable to their business needs. The whole edifice of building, maintaining and measuring utility of software will revolve around business services in an SOA approach. So it becomes easier to maintain focus on business benefits (or lack thereof) of software. With the traceability it provides, software becomes a manageable entity from being an unwieldy and randomly interconnected monolith. And there is some reuse possible in terms of non-functional services (such as security, authentication, personlisation etc.).
2. Ability to seperate concepts from implementation
The next important contribution of SOA approach is the focus it brings on seperating interface from implementation. The logical extension of this approach is to seperate conceptual elements from platform elements. So if you are using SOA approach towards software development, you have necessary inputs to create a large scale conceptul model of your business. You just need to filter out platform specific stuff from the interfaces you defined. You can further distill these interface specifications to seperate data and behaviour aspects. These are really reusable bits within your business. It is very easy to figure out how exactly these reusable bits can be implemented on different implementation platforms. This will give you necessary jump start for your journey towards a conceptual IT world.
So in my opinion SOA is good and it is the way to go. But not for the reasons stated by vendors. It is not going to make software drastically cheaper nor going to make software development drastically faster. Its just a small step in a long journey towards making enterprise software an entity managed by business folks rather than IT folks.
Tuesday, November 07, 2006
Agile, Iterative or Waterfall?
There is been a lot of interest and mis-conceptions about various life cycle methods for solution development. Please note carefully I am saying solution development and not software development. Enterprises develop solutions to the problems. The software content of the solution is developed by IT sub-organisation. The rest of it is assigned to different sub-organisations within enterprise. So when we discuss software development life cycle methods (I'll use short form SDLC henceforth), we must remember solution development lifecycle methods (I'll use SolDev as short form, henceforth) as well. A software development and deployment method has to synchronize with solution development and deployment method.
There are various SDLC methods in vogue. Waterfall method has been in use for ages and has its supporters and detractors. Iterative methods originated some time back and are in use in many enterprises. Agile method is the newest kid on the block and yet to make serious inroads into enterprise IT scenario.
Waterfall is a sequential method, waiting for previous phase to finish completely and expects it to deliver a signed and sealed deliverable. This deliverable is enhanced in the next phase till software gets delivered. It assumes that the requirements are well understood and wont change during software development. It is most risky of development approaches and has quite a large failure rate.
Iterative method is iterative as it's name suggests. It creates a initial, fully functional version of system and iteratively adds functionality to it to make it complete. During each iteration it also takes into account user's feedback for the earlier delivered functionality and corrects the implementation.
Agile method is a more aggressive version of iterative method, where timelines are shorter and sacrosanct. It also believes in face to face communication rather than written documentation.
Each of them has their own strengths and weaknesses. And whether to choose one over other is not a trivial decision.
A solution development method is normally iterative or sometimes waterfall but rarely agile. Normally solution development and deployment involve dealing with real life things and they are not as soft as software. That may explain why they dont use agile methods that much.
Typically quick-fix and operational solutions rarely involve a big solution design and deployment effort. Major effort is consumed in software development and deployment. Hence agile methods can be deployed as SolDev method. whereas tactical and strategic solutions involve a significant solution design and deployment effort so an iterative method appears a right choice for SolDev. Modern enterprises rarely use waterfall method as it is too fraught with risk. Again I am referring to intent of the solution and not the systems, when I say operational, tactical or strategic.
For example,
If you were to repair a leaking window in your house. You would call the tradesman, interact with him and get the job done in a day or two. You will give constant feedabck and get it done as you want. This is a quick-fix solution and agile method can be (so to say) SDLC method.
Whereas if you were to add a conservatory to your house, you may have to interact with lots of tradesmen (or you outsource to a contractor), you have to worry about changing furniture setting in your house and may have to change nature of the games in your kid's birthday party. Thats a tactical solution and can hardly be agile. You may iterate over the development of this solution, by first building the conservatory then adding the furniture and relocating existing furniture. You also have to think about new games to include in birthday party, which take advantage of the conservatory and furniture settings. Here actual building of conservatory is like building software and other things you do is part of solution development and deployment. Both these need to follow same life cycle methods otherwise you'll have problems. And agile method for both SDLC and SolDev wont work because you would not have bandwidth to support sofwtare development (i.e. building conservatory) as well as solution development (i.e. doing other stuff such as buying new furniture, relocating old one ). And just SDLC can't be agile because rest of the solution will not be ready anyway.
Same goes about building a new house altogether. Thats a strategic solution. and you would still want an iteartive solution. Build the basic structure of the house. Add all utilities, then interiors and finally finishing. Constantly giving feedback and checking for yourself how the house is getting built.
You were to do it in waterfall model. You would call in a contractor tell him what you want and hope he does it before your required date. Well, if it is something as standardised as house building and contactor is reliable you may consider this option.
So its quite clear that different life cycle methods are suitable for different kinds of SolDev and SDLC. They have their strenghts, but need to be deployed in right kind of scenario. An enterprise architect needs to define the decision framework for making this choice, within an enterprise.
To reuse or not to reuse?
As soon as I posted about reuse, an old colleague of mine wanted to reuse a small piece of code I had developed quite a while back.
It was nothing great. When we were attempting to make client-server connections to CICS in the good old days, we were hitting the limits on the CICS COMMAREA. We thought that if we compressed our messages, we would not hit the limit. Since the message went over the wire, all content had to be non-binary. One place we thought we could save space was by packing integers into a higher-base representation, because those messages carried a lot of integers. A base 64 representation of an integer takes 6 digits as against 9 in base 10, and can still go over the wire as text. This piece of code had to be as optimal as possible, so we developed a function which packed integers into a base 64 representation, and we used a trick to make it faster by taking advantage of the EBCDIC representation. It is part of the library we supply along with our MDA toolset.
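The original routine lived in a mainframe library, so what follows is only a portable Python sketch of the packing idea. The function names and the particular 64-character alphabet are my choices for illustration (any 64 characters that survive the wire as text would do), and the EBCDIC speed trick is deliberately left out:

```python
# Portable sketch of base-64 integer packing -- not the original
# mainframe routine. The alphabet is an assumption; any 64 characters
# that are safe as text on the wire would work.
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz"
            "0123456789+-")

def pack_base64(n: int, width: int = 6) -> str:
    """Render a non-negative integer as a fixed-width base-64 string."""
    if n < 0 or n >= 64 ** width:
        raise ValueError("value out of range for this width")
    digits = []
    for _ in range(width):
        n, r = divmod(n, 64)
        digits.append(ALPHABET[r])
    return "".join(reversed(digits))

def unpack_base64(s: str) -> int:
    """Inverse of pack_base64."""
    n = 0
    for ch in s:
        n = n * 64 + ALPHABET.index(ch)
    return n

# A 9-digit decimal integer fits comfortably in 6 base-64 digits
# (64**6 is about 6.9e10), so the wire field shrinks from 9 to 6.
assert unpack_base64(pack_base64(999_999_999)) == 999_999_999
```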
My colleague wanted to reuse the code, albeit in a different scenario, as it was well tested and had been in production for many years. Needless to say, he would have fallen flat on his face had he put blind faith in the code. It would have failed because it relied on the EBCDIC representation, and he was trying to deploy it in a non-EBCDIC setting.
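To see how encoding-specific cleverness betrays you, consider one plausible failure mode (purely illustrative, not the actual trick): deriving a character from a digit value by code-point arithmetic. In EBCDIC the letters are not contiguous ('A' to 'I' sit at 0xC1 to 0xC9, then 'J' jumps to 0xD1), while in ASCII they are, so arithmetic tuned to one code page silently produces wrong characters on the other:

```python
# Illustrative only: code-point arithmetic that assumes one encoding.
# Python's stdlib 'cp037' codec is a common EBCDIC code page.
for v in (0, 8, 9):
    ebcdic_char = bytes([0xC1 + v]).decode("cp037")  # the mainframe's view
    ascii_char = chr(0x41 + v)                       # naive ASCII arithmetic
    print(v, ebcdic_char, ascii_char)
# For v=9 the EBCDIC byte 0xCA is not 'J' at all, while the ASCII
# arithmetic happily yields 'J' -- the two worlds quietly diverge.
```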
Why am I narrating this story? It just re-emphasises my point about implementation reuse. Even with the best of intentions and support, implementation reuse is not as easy as it looks. My colleague was lucky to have me around; who would think some 50 lines of code could go so horribly wrong in a different execution context? If we had separated the concept from the implementation in this case, and generated an implementation for my colleague's execution context, it might have worked. Without that, he has to refactor the code, which may wipe out the gains he got by reusing it. I am not sure how I could have made that piece of code more reusable than it is, without breaking the non-functional constraint imposed on it by its own execution context.
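For what it's worth, here is a minimal sketch of what 'separating the concept from the implementation' could mean here: keep the concept (positional packing over some 64-character alphabet) and generate a small implementation for each execution context, rather than shipping one encoding-bound routine. The generator below is hypothetical, just to show the shape of the idea:

```python
def emit_pack_source(alphabet: str) -> str:
    """Emit Python source for a pack routine specialised to one alphabet.

    Hypothetical generator: each execution context gets its own
    implementation produced (and tested) from the shared concept.
    """
    if len(alphabet) != 64 or len(set(alphabet)) != 64:
        raise ValueError("need 64 distinct characters")
    return (
        f"ALPHABET = {alphabet!r}\n"
        "def pack(n, width=6):\n"
        "    digits = []\n"
        "    for _ in range(width):\n"
        "        n, r = divmod(n, 64)\n"
        "        digits.append(ALPHABET[r])\n"
        "    return ''.join(reversed(digits))\n"
    )
```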
Now, with refactoring we could have a piece of code which is more reusable than it was, but my colleague would have to spend the effort to make it so. It depends on whether he has that kind of investment available to him. And it still won't guarantee that it won't fail somebody else's requirement in a totally different execution context. It makes me more convinced that you should either have concept reuse, or be prepared to refactor while reusing. And don't expect 100% reuse.