Normally when I refer to 'Oracle' on this blog, I mean the 'Enterprise IT Oracle'. This time, however, I am referring to Larry Ellison's Oracle Corporation, because of its acquisition of one of the significant technologies of the 21st century: the Java programming language and its associated paraphernalia.
This is BIG. A large chunk of enterprise IT these days is built using Java technology, which is assumed to be a 'de facto' and 'open' standard. Now, with Oracle's acquisition of Sun, the openness of Java is in question.
What should organisations with large Java investments do? IBM is one of them: most of IBM's Internet stack, under the WebSphere brand, is based on Java. What would IBM do?
What would other hardware vendors like HP and Dell do? Do we see three hardware blocks emerging: Sun/Oracle, HP/Microsoft and IBM? Where would Cisco/Google servers fit in this scheme of things?
Continuing this analogy, would we see three software blocks emerging, viz. Java, .NET and possibly IBM's own Java-like technology (maybe a rehashed EGL with its own byte code)?
Will cloud providers be bound to one of these blocks? If these three blocks start erecting Chinese walls around themselves in order to lock customers in, how does one allow seamless movement from one 'cloud' provider to another?
What would be the de-risking strategy for enterprises to avoid lock-in within one of these blocks, even in an on-premise scenario? Does MDA sound attractive now? Is it time for enterprises to start focusing on separating their intellectual property from execution platforms?
Interesting times. Time to activate your enterprise IT Oracle and see which parts of your Enterprise Architecture need a re-look.
Showing posts with label MDA.
Tuesday, April 21, 2009
Friday, February 27, 2009
Of TLAs and FLAs
In my previous organisation every employee had a three-letter acronym (TLA), made from the employee's first, middle and last names, instead of an employee number. There is an anecdote about it as well. The company's then chief, also called the "Father of the Indian Software Industry", had actually ordered a four-letter acronym (FLA). But the software used to generate those acronyms produced a nasty one for the chief. (If you know his name, you can imagine what that four-letter word would have been.) So he ordered it to be changed to a three-letter one. (BTW, many happy returns of the day, Mr. Kohli.)
The trend appears to be going in reverse within IT. IT has had a lot of TLAs: ERP, SCM, CRM, EAI, BPM and SOA, to name a few you may have come across. Of late, however, the trend is moving towards FLAs, what with SaaS, PaaS and so on. Yet the most enduring acronym, the one which has survived the test of time, is neither a three-letter one nor a four-letter one. It is actually a five-letter one: RDBMS.
The reason it has survived this long is that it is more than an acronym. It is a well-thought-out piece of technology backed by solid science, not just a label coined by sales and marketing people or an industry analyst. The technology has proven easy to standardise, extensible, and able to serve a vast array of requirements, some of which were not even envisaged when it was initially developed.
Sadly, the same cannot be said of all the technologies you see around these days. Many of them are rehashes of old ideas in new packaging. They rely on finding the right audience at the right time to proliferate, and they thrive on the inherent problems they carry, which generate more revenue for their owners.
Enterprise architects need the vision to see through the marketing fluff and reach the bare bones of the technologies they are going to employ. Analysts can help to an extent, but their generic analysis may not be completely applicable to your situation. You need to equip your 'Oracle' function with this capability.
Sunday, December 02, 2007
Model driven development
Todd has asked a few questions about using models during software development. Though his questions come from a Business Process Model perspective, they apply to model-driven development as a whole.
Since I have considerable experience in this area, I would like to comment.
In my opinion, modeling does not negate the need for continuous integration or testing. Unless one can prove models correct with respect to requirements, using theorem provers or similar technologies, testing is a must. (Writing those verifiable requirements would take you ages, though.) And for large, enterprise-class developments one does need to define an appropriate unit in the model-driven world, to allow for parallel development. Continuous integration is one of the best practices one would not want to lose when multiple units are involved.
We had defined a full-fledged model-driven development methodology, with an elaborationist strategy, for developing enterprise-class components. We modeled the data and behaviour of a component as an object model, which was then elaborated in terms of business logic and rules before being converted into deployable artifacts. We did it this way because business logic and rules were considered too procedural to be abstracted in any usable modeling notation, but that has no bearing on the discussion that follows. The methodology allowed for continuous integration during all phases of development. We had defined the component as the unit for build and test. These units could be version-controlled and tested as units. Since this was the complete software development methodology, the same models were refined from early analysis to late deployment. The component as a unit, however, made sense only during the build and test phases. For requirements analysis and high-level design, different kinds of units were required, because during analysis and design different roles access these artifacts, and their needs differ from those of the people who build and test.
Lesson 1: Units may differ during different phases of the life cycle. This problem is unique to model-driven techniques, because in the non-model-driven world there is no single unit which goes across all phases of the life cycle. If you are using iterative methods, this problem becomes even trickier to handle.
We found that models have a greater need for completeness than source code, and cyclic dependencies cause problems. That is, the equivalent of a 'forward declaration' is very difficult to define in the model world, unless you are open to breaking the meta-models. For example, a class cannot have attributes without their data types being defined, and a data type may itself be a class that depends on the first class being ready. I am sure a similar situation arises in business process modeling too. This had a great implication for continuous integration, because these dependencies across units would lock everything into a synchronous step. That is good from a quality perspective but not very pragmatic. We had to devise something similar to a 'forward declaration' for models. I think I can generalise this and say that it will apply to any model-driven development which follows continuous integration.
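A rough sketch of what a model-world 'forward declaration' can look like (the names and structure here are hypothetical illustrations, not our actual tooling): a first pass registers every class as an empty placeholder, and a second pass resolves attribute types against that registry, so mutually dependent classes can be loaded without breaking the meta-model.

```python
class ModelClass:
    def __init__(self, name):
        self.name = name
        self.attributes = {}   # attribute name -> ModelClass (resolved in pass 2)

def load_model(declarations):
    """declarations: list of (class_name, {attr_name: type_name}) pairs."""
    registry = {}
    # Pass 1: register every class as a placeholder (the 'forward declaration').
    for class_name, _ in declarations:
        registry[class_name] = ModelClass(class_name)
    # Pass 2: resolve attribute types; cyclic references are now harmless.
    for class_name, attrs in declarations:
        for attr_name, type_name in attrs.items():
            registry[class_name].attributes[attr_name] = registry[type_name]
    return registry

# Two classes that refer to each other - impossible to load in a single pass.
model = load_model([
    ("Order", {"customer": "Customer"}),
    ("Customer", {"lastOrder": "Order"}),
])
```

With placeholders in place, units can be integrated independently even when their types refer to each other, which is what makes continuous integration workable.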
We had our own configuration management repository for models, but one could use a standard source-control repository, provided the tool vendor allows you to store modeling artifacts in a plain-text format. (Some source-control tools tolerate binary files as well, but then you cannot 'diff' and 'merge'.) Devising the proper granularity is tricky, and the point above should be kept in mind. Some tools interoperate well with each other and provide a nice experience (e.g. the Rational family of tools); then your configuration management tools can help you do a meaningful 'diff' and 'merge' on models too.
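To see why a plain-text format matters, here is a minimal sketch (the model shape and serialization format are invented for illustration): if each model element is written on its own line in a canonical, sorted order, an ordinary line-based diff pinpoints exactly what changed between two model versions.

```python
import difflib

def to_canonical_text(model):
    """model: dict mapping class name -> dict of {attribute: type name}.
    Sorted output makes the serialization canonical, so diffs are stable."""
    lines = []
    for cls in sorted(model):
        lines.append(f"class {cls}")
        for attr in sorted(model[cls]):
            lines.append(f"  attr {attr}: {model[cls][attr]}")
    return "\n".join(lines) + "\n"

v1 = {"Customer": {"name": "String"}}
v2 = {"Customer": {"name": "String", "email": "String"}}

# A line-based diff now shows only the added attribute.
delta = list(difflib.unified_diff(
    to_canonical_text(v1).splitlines(),
    to_canonical_text(v2).splitlines(), lineterm=""))
```

A binary model file would give the same change as an opaque "file differs"; the canonical text form is what makes 'diff' and 'merge' meaningful.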
Lesson 2: An appropriate configuration control tool is needed even in model-driven development.
The need for regression testing was higher because of the point above. Every change would ripple to every other part connected with it, marking it as changed. Traditional methods would then blindly mark all those artifacts for regression testing. Again, good from a quality perspective, though not very pragmatic. We had to make some changes in our change management and testing strategy to make it optimal.
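The ripple can be pictured as a traversal of the artifact dependency graph. This hypothetical sketch (not our actual strategy) contrasts the naive approach, which marks everything reachable for regression testing, with one pragmatic compromise: bounding the traversal depth, trading some quality assurance for a manageable test set.

```python
from collections import deque

def impacted(dependents, changed, max_depth=None):
    """dependents: artifact -> list of artifacts depending on it.
    Returns the set of artifacts to mark for regression testing."""
    seen = set(changed)
    queue = deque((artifact, 0) for artifact in changed)
    while queue:
        artifact, depth = queue.popleft()
        if max_depth is not None and depth >= max_depth:
            continue  # pragmatic cut-off: stop rippling past this depth
        for dep in dependents.get(artifact, []):
            if dep not in seen:
                seen.add(dep)
                queue.append((dep, depth + 1))
    return seen

deps = {"Customer": ["Order"], "Order": ["Invoice"], "Invoice": []}
full = impacted(deps, {"Customer"})                  # naive: all downstream
scoped = impacted(deps, {"Customer"}, max_depth=1)   # bounded ripple
```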
Lesson 3: Units need to be defined carefully to handle the trade-off between parallelism and testing effort during the build phase.
In short, model-driven methods tend to replicate the software development methodology that is used without models. Models provide a way to focus on key abstractions and not get distracted by all the 'noise' (for want of a better word) that goes with working software. That 'noise' itself can be modeled and injected into your models as cross-cutting concerns. In fact, based on my experience with this heavyweight model-driven approach, I came up with a lighter approach called 'Code is the model', which can even be generalised to 'Text specification is the model', removing the code v/s model dichotomy as far as software development methodology goes.
Nowadays some modeling tools have their own runtime platforms, so models execute directly on the platform. This avoids a build step, but defining a usable and practical configurable unit is still a must, and defining a versioning policy for that unit, along with a unit and regression testing strategy, cannot be avoided. When multiple such modeling tools, each with its own runtime platform, are used together, that presents its own set of challenges in defining testable and configurable units. But that's a topic for another discussion!
Monday, June 04, 2007
SOA - Necessary and sufficient
SOA is heralded as the 'must have' for business agility. I agree, up to a point: SOA is necessary but not sufficient to achieve the highest degree of business agility. Let me explain why I think so.
In the service-oriented world, information systems try to be congruent with the business world, providing information services in support of business services. Business organisations provide business services in order to carry out business activities. These business services are steps within business activities, and they use the information services provided by the underlying IT infrastructure.
However, the underlying IT infrastructure is not fully amenable to this business-service-oriented paradigm. At the implementation level, IT infrastructure has to deal with non-functional properties such as responsiveness, scale, availability, latency, cost, skills availability, etc. That imposes certain restrictions on implementations. For example, for reasons of scale, we normally separate behaviour and data. Behaviour (as represented in business logic and business rules) scales differently than data (and data stores: databases, file systems). That's why, in a typical distributed information system, there are more database servers than servers dedicated to executing business logic.
In the service-oriented world, the information services provided by information systems need to mask such implementation issues. The idea that SOA will provide business agility holds true if and only if the information services that enable business services can use disparate information systems seamlessly. In the SOA world, business services should lend themselves to rapid reorganisation and redeployment, in terms of business activity volumes, business responsiveness, speed of new product/service introduction, etc.
The current thinking seems to be that a set of open standards enabling integration between disparate information systems is all that is needed. With such an integration mechanism, one can create a facade of a business service using the underlying disparate information systems. Hence the emphasis on XML schemas, in-situ transformations, service choreography and, to an extent, mediation [between the required business service and the provided information service(s)].
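The facade idea can be sketched in a few lines (every name here is an invented illustration, not a real system): a business service composes and mediates between two disparate information services, masking them from its consumers.

```python
class CreditService:
    """Stands in for one underlying information system."""
    def score(self, customer_id):
        return 720  # canned value for illustration

class OrderService:
    """Stands in for another, disparate information system."""
    def open_orders(self, customer_id):
        return ["ORD-1"]  # canned value for illustration

class CustomerOnboarding:
    """Business-service facade: consumers see one service, not the systems behind it."""
    def __init__(self, credit, orders):
        self.credit = credit
        self.orders = orders

    def can_extend_credit(self, customer_id):
        # Mediation: combine answers from disparate systems into one decision.
        return (self.credit.score(customer_id) > 650
                and len(self.orders.open_orders(customer_id)) < 5)

facade = CustomerOnboarding(CreditService(), OrderService())
```

The facade is exactly the "necessary but not sufficient" part: it hides where the data lives, but it does nothing about the granularity problem discussed next.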
To me this is only part of the solution: the necessary condition, but not a sufficient one.
As I have posted in the past, one really does not know what the granularity of information services should be. If you provide too-granular information services, you will be better at reorganising but hard pressed to meet non-functional parameters. If you provide services just good enough for current usage, satisfying non-functional parameters, you will have a tough time reorganising. So for all practical purposes, any business service change implies possible information-service changes, rather than just a reorganisation of information services.
That would mean the agility of business service reorganisation comes down to change management in information systems. If you make pragmatic decisions in favour of speed of change, it leads to duplication and redundancy. If you try to keep your information systems pure and without redundancy, you sacrifice speed of change.
So the key appears to be:
getting the granularity of your information services just right for all the reorganisations the business could need. You cannot really know all possible business changes, but you can know them up to a certain time horizon, so that you are only reorganising your information services rather than redeveloping them.
if this is not possible, or is considered risky, taking a refactoring-oriented approach and incrementally building up the service definitions.
and, whenever you do change information systems (because despite your best efforts the business came up with a change that is not possible with the current information service definitions), using MDA or Software Factories (or any other conceptual-to-implementation mapping technology) to effect the change from conceptual business services onto their IT implementation. This would bring down the time needed to make changes, and would also let you make pragmatic decisions, because even if there are duplications and redundancies at the implementation level, the conceptual field stays clean and pure.
That would be complete SOA for me.
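The conceptual-to-implementation mapping in that last point can be illustrated with a toy model-to-text transformation (the service definition and generator below are hypothetical, far simpler than real MDA or Software Factories tooling): the conceptual service stays clean, and implementation stubs are regenerated from it whenever the business definition changes.

```python
# A conceptual service definition - the 'clean and pure' field.
service_model = {
    "name": "AccountLookup",
    "operations": [("find_by_id", ["account_id"]),
                   ("find_by_owner", ["owner_name"])],
}

def generate_stub(model):
    """Map the conceptual definition onto an implementation skeleton."""
    lines = [f"class {model['name']}:"]
    for op, params in model["operations"]:
        lines.append(f"    def {op}(self, {', '.join(params)}):")
        lines.append(f"        raise NotImplementedError('{op}')")
    return "\n".join(lines)

stub = generate_stub(service_model)
```

Changing the model and regenerating is what brings the change time down: the implementation-level duplication can be tolerated because it is derived, not hand-maintained.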
Wednesday, December 27, 2006
Enterprise architecture in MDA terms
Recently one of my colleagues quipped that UML is nothing but pretty pictures. Yet at the same time he wanted to use MDA for EA, and he pointed to this document as a good start.
I feel he was wrong about UML being nothing more than pretty pictures. It has a meta-model behind it, which can be extended and used in the ways you want. I myself have extended UML to capture object-relational mappings and generated a lot of code. Given the misconceptions about UML, no wonder there is big resistance to using MDA as a means in the enterprise IT scenario. But things are changing. There are now attempts to make MDA the means for enterprise architecture definition and deployment. There are a few challenges in achieving this, though.
I have often wanted models for every cell of the Zachman framework. For me, the downward journey within a column of the Zachman f/w is one of model refinement, and the horizontal journey is one of model traceability. However, the Zachman f/w is just a framework; to be useful, it needs to be fleshed out, and the canvas is very big. So those EAs within an enterprise who believe in MDA should take a couple of responsibilities upon themselves: a) to create models for every cell of the Zachman f/w and prove that model transformations do work, by refining models down the cells; and b) to create a body of knowledge on deploying and governing these models across the f/w. How to fit this into the normal IT governance f/w and secure funding is a big challenge. For that, I propose we must first use 'model-driven development' (MDD for short) to prove the value of an MDA-like approach.
MDA is a big culture shock for most IT organisations, precisely because everyone out there thinks UML is nothing but pretty pictures. Those who believe in MDA need to start small and prove the value of the MDA approach; only then can we go to the next level, that is, making EA congruent with MDA. Using MDD is a very good way to begin proving the value of MDA, unless you find organisations which are sold on MDA to begin with. In short, this is a very tough ask, and a lot of farming/hunting is required to nurture the approach. Being a practitioner, I would suggest trying this approach out on a small scale, in non-mission-critical projects.
Another problem we might face is that models are a rigid way of capturing knowledge. They are suitable for the more defined aspects of enterprise architecture (i.e. all circles of the TOGAF framework) but not for the more fluid aspects (like the business and IT strategies required in a few cells of the Zachman f/w). So from the TOGAF f/w perspective they are OK, but not from the Zachman f/w perspective. To be used with the Zachman f/w, we may have to augment MDA with something more, some sort of unstructured knowledge repository. But this is a long way off and can be ignored for the time being.
I find it good that interest in MDA is being reinvigorated.
Thursday, November 30, 2006
No stereotyping please!
Long ago I was a starry-eyed (a bit of exaggeration here) entrant into the world of IT, when the IT revolution in India was about to begin. I was part of the elite 'tools group', using translator technologies to build home-grown tools for the various projects that came our organisation's way. Amidst all those small projects, a big depository from the western world developed enough faith in us to ask us to develop their complete software solution. The visionaries in my organisation did not do it in the normal run-of-the-mill way. They decided to build home-grown code generators, to ensure consistent quality, and created a factory model of development. I was one of the junior-most members of the team which built and maintained those tools.
Then, while working on another project for a large British telecom company (oops! could not hide the name), another visionary from my organisation put this factory model into practice in a geographically separated way and delivered tremendous cost savings. That was the first truly offshored project done by my organisation. The tools we had developed helped a lot in sending the requirements offshore, in model form, and getting code back to be tested onsite. We provided consistent quality and on-time delivery. Needless to say, it was a huge success and more business came our way. Mind you, this was well before Y2K made Indian outsourcers a big hit.
During my days in the tools group I had the good fortune to attend a seminar by Prof. K. V. Nori. His speciality is translator technologies, and he taught at CMU. He exhorted us to 'Generate the generator!' Coming from a compiler-building background, it was natural for him to say that, but for me it was like an 11th commandment. It captivated me, and we did try to generate the generator. During my MasterCraft days, I convinced two of my senior colleagues, and together we designed a language called 'specL'. 'specL' has now become the basis of our efforts on the 'MOF Model to Text' standard under OMG's initiative. This is testimony to the fact that we are not just cheap labour suppliers; we are good enough to be thought leaders within global IT.
It was not cheap labour alone that helped us succeed in the outsourcing business; it was also innovation, grit and determination. That's why it pains me when somebody stereotypes Indian outsourcers as 'sub-optimal', or India as a 'sub-optimal' location. Firstly, I don't like stereotyping, and secondly, it is a wrong stereotype. One can have a position opposing outsourcing, offshoring, what have you; there are enough arguments against outsourcing. But please don't denigrate a group as sub-optimal.
And if I am going to be stereotyped anyway, then please include me in the group of "all men who are six feet tall, handsome, left-handed, fathers of cute four-year-olds". Then I may not feel as bad being called sub-optimal. (Well, 'handsome' and 'left-handed' are aspirational adjectives, distant from reality.)
Tuesday, April 21, 2009
Oracle's Java
Normally when I refer to 'Oracle' on this blog, I mean the 'Enterprise IT Oracle'. This time, however, I am referring to Larry Ellison's Oracle Corporation, because of its acquisition of one of the significant technologies of the 21st century, viz. the Java programming language and its associated paraphernalia.
This is BIG. A large chunk of enterprise IT these days is built using Java technology, which is assumed to be a 'de facto' and 'open' standard. Now, with Oracle's acquisition of Sun, the openness of Java is under question.
What should organisations with large Java investments do? Well, IBM is one of those: most of IBM's Internet stack under the WebSphere brand is based on Java. What would IBM do?
What would other hardware vendors like HP and Dell do? Do we see three hardware blocs emerging: Sun/Oracle, HP/Microsoft and IBM? Where would Cisco/Google servers fit in this scheme of things?
Continuing this analogy, would we see three software blocs emerging, viz. Java, .NET and possibly IBM's own Java-like technology (maybe a rehashed EGL with its own byte code)?
Will cloud providers be bound to one of these blocs? If these three blocs start erecting Chinese walls around themselves in order to lock customers in, how does one allow seamless movement from one 'cloud' provider to another?
What would be a de-risking strategy for enterprises to avoid lock-in within one of these blocs, even in an on-premise scenario? Does MDA sound attractive now? Is it time enterprises started focusing on separating their intellectual property from execution platforms?
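The de-risking idea behind MDA can be sketched in a few lines: keep the intellectual property in a platform-independent model, and keep the platform bindings in swappable mappings. The model format, type maps and renderer below are all invented for the example.

```python
# A minimal sketch of separating IP (the platform-independent model) from
# execution platforms: one model, two generated artifacts. Everything here
# is a made-up illustration, not a real MDA toolchain.

MODEL = {  # platform-independent model: pure business knowledge
    "entity": "Order",
    "attributes": [("id", "integer"), ("total", "decimal")],
}

TYPE_MAPS = {  # platform-specific mappings, swappable without touching MODEL
    "java": {"integer": "int", "decimal": "java.math.BigDecimal"},
    "csharp": {"integer": "int", "decimal": "System.Decimal"},
}

def render(model, platform):
    """Project the platform-independent model onto one execution platform."""
    tmap = TYPE_MAPS[platform]
    fields = [f"  {tmap[t]} {name};" for name, t in model["attributes"]]
    return "\n".join([f"class {model['entity']} {{", *fields, "}"])

print(render(MODEL, "java"))
print(render(MODEL, "csharp"))
```

If a vendor bloc becomes hostile, only the mapping changes; the model, the enterprise's actual IP, survives intact.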
Interesting times. Time to activate your enterprise IT Oracle and see which parts of your Enterprise Architecture need a re-look.
Friday, February 27, 2009
Of TLAs and FLAs
In my previous organisation every employee had a three-letter acronym (TLA), made from the employee's first, middle and last names, instead of an employee number. There was an anecdote about it as well. The company's then chief, who is also called the "Father of the Indian Software Industry", had actually ordered a four-letter acronym (FLA). But the software that was used to generate those acronyms produced a nasty one for the chief. (If you know his name, you can imagine what that four-letter word would have been.) So he ordered it to be changed to a three-letter one. (BTW, many happy returns of the day, Mr. Kohli.)
Within IT, it appears to be going in reverse. In IT there were a lot of TLAs: ERP, SCM, CRM, EAI, BPM and SOA, to name a few you may have come across. Of late, however, the trend is moving towards FLAs, what with SaaS, PaaS and so on. Yet the most enduring acronym, the one that has survived the test of time, is neither a three-letter one nor a four-letter one. It is actually a five-letter one: RDBMS.
The reason it has survived this long is that it is more than an acronym. It is a well-thought-out piece of technology backed by solid science, not just an acronym coined by sales and marketing folks or an industry analyst. The technology has proven to be easily standardised and extensible, serving a vast array of requirements, some of which were not even envisaged when the technology was initially developed.
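That standardisation is visible in practice: the same declarative SQL runs across engines, large and small. A quick sketch using Python's built-in SQLite; the table and figures are invented for the example.

```python
# The durability of the RDBMS rests on a standardised, declarative interface.
# The same SQL below would run essentially unchanged on any mainstream engine;
# here we use the SQLite that ships with Python's standard library.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO account (id, balance) VALUES (?, ?)",
                 [(1, 100.0), (2, 250.0)])

# Declarative query: we state *what* we want, the engine decides *how*.
total, = conn.execute("SELECT SUM(balance) FROM account").fetchone()
print(total)  # 350.0
```

The point is not SQLite itself, but that a technology backed by a real theory (the relational model) yields an interface stable enough to outlive its acronym-of-the-year rivals.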
Sadly, the same cannot be said of all the technologies you see around these days. Many of them are a rehash of old ideas in new packaging. They rely on finding the right audience at the right time to proliferate, and thrive on the inherent problems they carry to generate more revenue for their owners.
Enterprise architects need the vision to see through the marketing fluff and reach the bare bones of the technologies they are going to employ. Analysts can help to an extent, but their generic analysis may not be completely applicable to your situation. You need to equip your 'Oracle' function with this capability.
Sunday, December 02, 2007
Model driven development
Todd has asked a few questions about using models during the development of software. Though his questions are from a Business Process Model perspective, they apply generally to model driven development as a whole.
Since I have considerable experience in this area, I would like to comment.
In my opinion, modeling negates the need for neither continuous integration nor testing. Unless one can prove models correct with respect to requirements using theorem provers or similar technologies, testing is a must. (Writing those verifiable requirements would take you ages, though.) And one does need to define an appropriate unit in the model driven development world for large, enterprise-class developments, to allow for parallel development. Continuous integration is one of the best practices one would not want to lose when multiple units are involved.
We had defined a full-fledged model driven development methodology, with an elaborationist strategy, for developing enterprise-class components. We modeled the data and behaviour of a component as an object model, which was then elaborated in terms of business logic and rules before being converted into deployable artifacts. We did it this way because business logic and rules were considered too procedural to be abstracted in any usable modeling notation, but that has no bearing on the discussion that follows. The methodology allowed for continuous integration during all phases of development. We had defined the component as the unit for build and test; these units could be version controlled and tested as units. Since it was a complete software development methodology, the same models were refined from early analysis to late deployment. The component as a unit, however, made sense only during the build and test phases. For requirements analysis and high-level design, different kinds of units were required, because during analysis and design different roles access these artifacts, and their needs differ from those of the people who build and test.
Lesson 1: Units may differ during different phases of the life cycle. This problem is unique to model driven techniques, because in the non-model-driven world there is no single unit that spans all phases of the life cycle. If you are using iterative methods, this problem becomes even trickier to handle.
We found that models have a greater need for completeness than source code, and cyclical dependencies cause problems. That is, the equivalent of a 'forward declaration' is very difficult to define in the model world, unless you are open to breaking the meta-models. For example, a class cannot have an attribute whose data type is undefined, and that data type may itself be a class that depends on the first class being ready. I am sure similar situations arise in business process modeling too. This had a great implication for continuous integration, because these dependencies across units would lock everything into a synchronous step. It is good from a quality perspective but not very pragmatic. We had to devise something similar to a 'forward declaration' for models. I think I can generalise this and say that it will apply to all model driven development that follows continuous integration.
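The 'forward declaration' workaround amounts to resolving a model in two passes: first declare every name as a placeholder, then link attribute types, so mutually dependent classes can be processed in any order. The model format below is invented for illustration; it is not the actual mechanism we built.

```python
# A sketch of two-pass resolution as a model-world 'forward declaration':
# a class may reference another class declared later (or in another unit).

raw_model = [  # attribute types given only by name, possibly not yet defined
    ("Order",    [("customer", "Customer")]),   # refers to Customer, below
    ("Customer", [("name", "String")]),
]

def resolve(raw):
    # Pass 1: declare every class name as a placeholder (the 'forward decl').
    classes = {name: {} for name, _ in raw}
    classes["String"] = {}  # assume primitives are pre-declared
    # Pass 2: attribute types can now be linked regardless of declaration order.
    for name, attrs in raw:
        for attr, type_name in attrs:
            if type_name not in classes:
                raise NameError(f"unresolved type {type_name!r}")
            classes[name][attr] = type_name
    return classes

resolved = resolve(raw_model)
print(resolved["Order"])  # {'customer': 'Customer'}
```

A single-pass scheme would reject `Order` outright, which is exactly the synchronous lock-step the post describes; the placeholder pass is what un-blocks parallel, per-unit work.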
We had our own configuration management repository for models, but one could use a standard source control repository, provided the tool vendor allows you to store modeling artifacts in a plain text format. (Some source control tools tolerate binary files as well, but then you can't 'diff' and 'merge'.) Devising the proper granularity is tricky, and the point above should be kept in mind. Some tools interoperate well with each other and provide a nice experience (e.g. the Rational family of tools); then your configuration management tools can help you do a meaningful 'diff' and 'merge' on models too.
Lesson 2: An appropriate configuration control tool is needed even in model driven development.
The need for regression testing was higher because of the point above. Every change would ripple to every other part connected with it, marking it as changed, and traditional methods would then blindly mark all those artifacts for regression testing. Again, good from a quality perspective, though not very pragmatic. We had to make some changes in the change management and testing strategy to make it optimal.
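The naive ripple strategy is just a transitive closure over the dependency graph: everything reachable from a changed artifact gets marked for re-test. A small sketch, with an invented dependency graph, shows why this over-marks.

```python
# Change-ripple sketch: edges point from an artifact to the artifacts that
# depend on it. The graph is made up for the example.
from collections import deque

dependents = {
    "CustomerModel": ["OrderModel", "CustomerDAO"],
    "OrderModel":    ["OrderDAO", "InvoiceReport"],
    "CustomerDAO":   [],
    "OrderDAO":      [],
    "InvoiceReport": [],
}

def ripple(changed):
    """All artifacts the naive strategy would mark for regression testing."""
    marked, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in marked:
                marked.add(dep)
                queue.append(dep)
    return marked

print(sorted(ripple("CustomerModel")))
# ['CustomerDAO', 'InvoiceReport', 'OrderDAO', 'OrderModel']
```

One change to `CustomerModel` marks four artifacts; in a real repository with thousands of connected artifacts, the closure is close to everything, which is the pragmatism problem the post points at.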
Lesson 3: Units need to be defined carefully to handle the trade-off between parallelism and testing effort during the build phase.
In short, model driven methods tend to replicate the software development methodology that is used without models. Models provide a way to focus on the key abstractions and not get distracted by all the 'noise' (for want of a better word) that goes with working software. That 'noise' can itself be modeled and injected into your models as cross-cutting concerns. In fact, based on my experience with this heavyweight model driven approach, I came up with a lighter approach called 'Code is the model', which can even be generalised to 'Text specification is the model', removing the code v/s model dichotomy as far as software development methodology goes.
Nowadays some modeling tools have their own runtime platforms, so models execute directly on that platform, which avoids a build step. But defining a usable and practical configurable unit is still a must, and defining a versioning policy for this unit, along with a unit and regression testing strategy, cannot be avoided. When multiple such modeling tools, each with its own runtime platform, are used together, that will pose its own set of challenges in defining testable and configurable units. But that's a topic for another discussion!
Labels:
BPM,
Configuration management,
MDA,
MDD,
Model driven development
Monday, June 04, 2007
SOA - Necessary and sufficient
SOA is heralded as the 'must have' for business agility. I agree, up to a point: SOA is necessary but not sufficient to achieve the highest degree of business agility. Let me explain why I think so.
In the service oriented world, information systems try to be congruent with the business world, providing information services in support of business services. Business organisations provide business services in order to carry out business activities. These business services are steps within business activities, and they use the information services provided by the underlying IT infrastructure.
However, the underlying IT infrastructure is not fully amenable to this business service oriented paradigm. At the implementation level, IT infrastructure has to deal with non-functional properties such as responsiveness, scale, availability, latency, cost, skills availability, etc. That imposes certain restrictions on implementations. E.g., for reasons of scale we normally separate behaviour and data: behaviour (as represented in business logic and business rules) scales differently from data (and data stores - databases, file systems). That's why, in a typical distributed information system, there are more database servers than servers dedicated to executing business logic.
In the service oriented world, the information services provided by information systems need to mask such implementation issues. The idea that SOA will provide business agility holds true if, and only if, information services enable business services to use disparate information systems seamlessly. In the SOA world, business services should lend themselves to rapid re-organisation and redeployment, in terms of business activity volumes, business responsiveness, speed of new product/service introduction, etc.
The current thinking seems to be that a set of open standards enabling integration between disparate information systems is all that is needed. With such an integration mechanism, one can create the facade of a business service using the underlying disparate information systems. Hence the emphasis on XML schemas, in-situ transformations, service choreography and, to an extent, mediation [between the required business service and the provided information service(s)].
To me this is only part of the solution. It is the necessary condition, but not sufficient.
As I have posted in the past, one really does not know what the granularity of information services should be. If you provide very fine-grained information services, you will be better at reorganising but hard pressed to meet the non-functional parameters. If you provide services just good enough for current usage, satisfying the non-functional parameters, you will have a tough time reorganising. So, for all practical purposes, any business service change brings possible information service changes, rather than just a reorganisation of information services.
That would mean the agility of business service reorganisation comes down to change management in the information systems. If you make pragmatic decisions in favour of speed of change, it leads to duplication and redundancy. If you try to keep your information systems pure and without redundancy, you sacrifice the speed of change.
So the key appears to be:
- getting your information services' granularity just right for all the possible reorganisations the business will need. You cannot really know all possible business changes, but you can know them up to a certain time horizon, so that you are only re-organising your information services rather than redeveloping them.
- if this is not possible, or is considered risky, taking a refactoring-oriented approach and incrementally building the service definitions.
- whenever you do change the information systems (because, despite your best efforts, the business came up with a change that is not possible with the current information service definitions), using MDA or Software Factories (or any other conceptual-to-implementation mapping technology) to effect the change from the conceptual business services onto their IT implementation. This would bring down the time to make changes, and would also let you make pragmatic decisions, because even if there are duplications and redundancies at the implementation level, the conceptual field stays clean and pure.
That would be complete SOA for me.
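The reorganisation-over-redevelopment point can be sketched in code: if the information services are fine-grained enough, a new business service is just a recomposition of existing ones. The services and composition helper below are invented for the example, not any real SOA stack.

```python
# Three fine-grained information services (all invented, hard-coded data).
def get_customer(cid):      return {"id": cid, "segment": "retail"}
def get_orders(cid):        return [{"id": 1, "total": 120.0}]
def credit_limit(segment):  return {"retail": 500.0}[segment]

def compose(*steps):
    """Build a business service by choreographing information services."""
    def service(cid):
        result = {}
        for step in steps:
            result.update(step(cid))
        return result
    return service

# Yesterday's business service...
account_view = compose(lambda c: get_customer(c),
                       lambda c: {"orders": get_orders(c)})

# ...reorganised for a new business need, with no information service rewritten.
risk_view = compose(lambda c: get_customer(c),
                    lambda c: {"limit": credit_limit(get_customer(c)["segment"])})

print(account_view(7))
print(risk_view(7))
```

The moment a required attribute does not exist in any underlying service, composition stops being enough and you are back to changing the information systems, which is where the conceptual-to-implementation mapping earns its keep.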
Wednesday, December 27, 2006
Enterprise architecture in MDA terms
Recently one of my colleagues quipped that UML is nothing but pretty pictures. At the same time, though, he wanted to use MDA for EA, and pointed to this document as a good start.
I feel he was wrong about UML being nothing more than pretty pictures. It has a meta-model behind it, which can be extended and used in ways you want. I myself have extended UML to capture object-relational mappings and generated a lot of code from it. Given such misconceptions about UML, no wonder there is big resistance to MDA being used as a means in the enterprise IT scenario. But things are changing: there are now attempts to make MDA the means for enterprise architecture definition and deployment. There are a few challenges in achieving this, though.
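The 'more than pretty pictures' point is that an annotated model is machine-processable. Here is a minimal sketch of the idea behind extending a model with mapping information (the way a UML profile adds stereotypes and tagged values) and generating code from it; the tiny model format, tagged values and type map are all invented for the example, not the actual extension I built.

```python
# A class model carrying O/R mapping annotations, in the spirit of a UML
# profile's tagged values. Everything here is a made-up illustration.
model = {
    "class": "Customer",
    "table": "CUSTOMER",  # tagged value: which table this class maps to
    "attrs": [("id", "int", "CUST_ID"), ("name", "str", "CUST_NAME")],
}

SQL_TYPES = {"int": "INTEGER", "str": "VARCHAR(255)"}

def to_ddl(m):
    """Generate SQL DDL from the annotated class model."""
    cols = ",\n".join(f"  {col} {SQL_TYPES[t]}" for _, t, col in m["attrs"])
    return f"CREATE TABLE {m['table']} (\n{cols}\n);"

print(to_ddl(model))
# CREATE TABLE CUSTOMER (
#   CUST_ID INTEGER,
#   CUST_NAME VARCHAR(255)
# );
```

A pretty picture cannot do this; a model with a meta-model behind it can, which is the whole case for MDA.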
I have often wanted models for every cell of the Zachman framework. For me, the downward journey within a column of the Zachman f/w is one of model refinement, and the horizontal journey is one of model traceability. However, the Zachman f/w is just a framework; to be useful, it needs to be fleshed out, and the canvas is very big. So those EAs within an enterprise who believe in MDA should take upon themselves a couple of responsibilities: a) to create models for every cell of the Zachman f/w and prove that model transformations do work, by refining models down the cells; and b) to create a body of knowledge on deploying and governing these models across the f/w. How to fit this into the normal IT governance f/w and secure funding is a big challenge. For that, I propose that we first use 'model driven development' (MDD for short) to prove the value of an MDA-like approach.
MDA is a big culture shock for most IT organisations, precisely because everyone out there thinks UML is nothing but pretty pictures. Those who believe in MDA need to start small and prove the value of the MDA approach; only then can we go to the next level, that is, making EA congruent with MDA. Using MDD is a very good way to begin proving the value of MDA, unless you find organisations that are sold on MDA to begin with. In short, this is a very tough ask, and a lot of farming/hunting is required to nurture the approach. Being a practitioner, I would suggest trying this approach out on a small scale, in non-mission-critical projects.
Another problem we might face is that models are a rigid way of capturing knowledge. They are suitable for the more defined aspects of enterprise architecture (i.e. all circles of the TOGAF framework) but not for the more fluid aspects (like the business and IT strategies required in a few cells of the Zachman f/w). So from the TOGAF f/w perspective they are OK, but not from the Zachman f/w perspective. To be used with the Zachman f/w, we may have to augment MDA with something more - some sort of unstructured knowledge repository. But this is a long way off and can be ignored for the time being.
It is good to see interest in MDA being reinvigorated.
I feel he was wrong about UML being nothing more than pretty pictures. It has a meta-model behind it. Which can be extended and used in ways you want. I myself have extended UML to capture object relational mappings and generated a lot of code. Given misconceptiosn about UML, no wonder there is big resistance to MDA to be used as means, in enterprise IT scenario. But things are changing. Now there are attempts to make MDA means for enterprise architecture, definition and deployment. There are a few challenges in achieving this, though.
I often wanted models for every cell of Zachman framework. For me the downward journey within a column of Zachman f/w is that of model refinement and horizontal journey is that of model traceability. However Zachman f/w is just a framework. To be useful, it need to be fleshed out. The canvas is very big. So those EA within an enterprise, who believe in MDA, should take upon themsalves couple of responsibilities. a) to create models for every cell of zachman f/w and prove model transformations do work, by refining models down the cells. And b) they must create a body of knowledge on deploying and governing these models, across the f/w. How to fit this in normal IT governance f/w and secure funding is a big challenge. For that I propose that we must first use 'model driven development' (MDD for short) to prove value of MDA like approach.
MDA is a big culture shock for most IT organisation, precisely because everyone out there thinks UML is nothing but pretty pictures. Those who believe in MDA need to start small and prove value of MDA approach then only we can go to next level, that is making EA congruent with MDA. Using MDD is a very good way to begin proving value of MDA, unless you find organisations which are sold on MDA to begin with. In short this is a very tough ask and lot of farming/hunting is required to nurture the approach. Being a practioner I would suggest to try this approach out on small scale, in non mission-critical projects.
Another problem we might face is that models are a rigid way of capturing knowledge. They are suitable for the more defined aspects of enterprise architecture (i.e. all circles of the TOGAF framework) but not for the more fluid aspects (like the business and IT strategies required in a few cells of the Zachman f/w). So from the TOGAF f/w perspective they are OK, but not from the Zachman f/w perspective. To be used with the Zachman f/w we may have to augment MDA with something more, some sort of unstructured knowledge repository. But this is a long way off and can be ignored for the time being.
It is good to see interest in MDA being reinvigorated.
Thursday, November 30, 2006
No stereotyping please!
Long ago I was a starry-eyed (a bit of exaggeration here) entrant into the world of IT, when the IT revolution in India was about to begin. I was part of an elite 'tools group', using translator technologies to build home-grown tools for the various projects that used to come our organisation's way. Amidst all those small projects, a big depository from the Western world developed enough faith in us to ask us to develop their complete software solution. The visionaries in my organisation did not do it in the normal run-of-the-mill way. They decided to build home-grown code generators, to ensure consistent quality, and created a factory model of development. I was one of the juniormost members of the team which built and maintained those tools.
Then, while working on another project for a large British telecom company (oops! could not hide the name), another visionary from my organisation put this factory model into practice in a geographically separated way and delivered tremendous cost savings. That was the first truly offshored project done by my organisation. The tools we had developed helped a lot in sending the requirements offshore - in model form - and getting code back, to be tested onsite. We provided consistent quality and on-time delivery. Needless to say it was a huge success and more business came our way. Mind you, this was much before Y2K made Indian outsourcers a big hit.
During my days in the tools group I had the good fortune to attend a seminar by Prof. K. V. Nori. His speciality is translator technologies and he taught at CMU. He exhorted us to 'Generate the generator!' Coming from a compiler-building background, it was natural for him to say that. But for me it was like an 11th commandment. It captivated me. We did try to generate the generator. During my MasterCraft days, I convinced two of my senior colleagues and together we designed a language called 'specL'. 'specL' has now become the basis of our efforts on the 'MOF Model to Text' standard under OMG's initiative. This is testimony to the fact that we are not just cheap labour suppliers. We are good enough to be thought leaders within global IT.
It was not cheap labour alone that helped us succeed in the outsourcing business. It was also innovation, grit and determination. That's why it pains me when somebody stereotypes Indian outsourcers as 'sub-optimal' or India as a 'sub-optimal' location. Firstly, I don't like stereotyping, and secondly it is a wrong stereotype. One can hold a position opposing outsourcing, offshoring, what have you. There are enough arguments against outsourcing, but please don't denigrate a group as sub-optimal.
And if I am going to be stereotyped anyway, then please include me in the group of "all men who are six feet tall, handsome, left-handed, fathers of cute four-year-olds". Then I may not feel as bad being called sub-optimal. (Well, 'handsome' and 'left-handed' are aspirational adjectives, distant from reality.)