Monday, June 27, 2016

Is Cloud ready for the mainstream?

I wrote about the Cloud approach to IT four years ago in this blog. At the time, I wanted to urge caution, particularly in relation to Production implementation. 

Four years on, how have things changed?

I think the Cloud has finally Grown Up. It has always been popular with development teams and leading-edge (or should that be bleeding-edge?) deployments, but is it now time to see Cloud provisioning as the de-facto approach to Infrastructure requirements?

It could be argued that, for commercial organisations, Cloud really started with Software-as-a-Service (SaaS). Companies such as Salesforce.Com established the concept of outsourcing non-core business processes. (Although I suspect my sales & marketing colleagues would take exception to being categorised as "non-core"!) Within IT itself, cloud-based Service Desks such as Service-Now began to eat into traditional service desk markets. 

Subsequently, the concept of Platform-as-a-Service (PaaS) began to be exploited by Development teams who wanted to spin up "Build" and "Test" environments quickly. 

Compute environments were (and still are) another valuable use-case: make use of massive CPU capacity to do data analytics or asset pricing, avoiding the need to purchase on-premise processing. This, plus the idea of just provisioning Storage (Infrastructure-as-a-Service, or IaaS), is where the questions of security and reliability came in.

Put simply: why should a company trust a third-party supplier to look after its confidential data? To answer that question is to address the heart of Cloud, be it IaaS, PaaS, or SaaS.

In my view, the questions of Security are now being addressed. Many Cloud providers now conform to rigorous security rules regarding data isolation, "Chinese walls" and other practices, so that even some Banks are now prepared to trust their secure data to a Cloud service.

Reliability and Availability are also being addressed. However, this does require a different philosophy towards infrastructure. The approach is to view servers not as "pets" (having individual attributes, and to be nursed back to health if they get sick) but rather to treat them as "cattle" - herds of identical, interchangeable units. If one gets sick, you just kill it and replace it with another. But this does mean that Applications need a totally different approach.

If you want your application to be able to run on a Cloud solution, you need to recognise that whilst the environment itself may be stable, individual components themselves might fail. This is much more of an "organic" approach to resilience, compared with the older "technocratic" approach of ensuring resiliency by ensuring availability of each and every component. 

So the new approach involves (a minimal sketch follows the list below):

- overall infrastructure is "stateless", and runs very small "micro" ACID (Atomic, Consistent, Isolated, Durable) transactions.
- each transaction takes minimal elapsed time and can run on any host.
- very simple persistent storage mechanisms are used to store user "state" where necessary.
- failure of infrastructure does not lead to failure of applications.
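
To make this concrete, here is a minimal Python sketch of a handler written in that style. It is purely illustrative: the in-memory dictionary stands in for whatever simple persistent storage service would really hold the user "state", and every name in it is invented.

import uuid

# Stand-in for the simple persistent storage mechanism shared by all hosts.
# In a real deployment this would be an external service, not a local dict.
external_state_store = {}

def handle_request(user_id: str, amount: float) -> str:
    """One small, short-lived unit of work; no state is kept on this host."""
    # Read the caller's state from the shared store (defaults for a new user).
    state = external_state_store.get(user_id, {"balance": 0.0})

    # Do the minimal piece of work, keeping elapsed time small so the
    # transaction completes even if this host is about to be recycled.
    state["balance"] += amount

    # Write the result straight back; nothing is cached locally.
    external_state_store[user_id] = state

    # Return a receipt; the next request can land on any other host.
    return f"txn-{uuid.uuid4()}: balance now {state['balance']:.2f}"

if __name__ == "__main__":
    print(handle_request("user-42", 10.0))
    print(handle_request("user-42", 5.0))  # could equally be served by another host

Because the handler holds nothing locally, a "sick" host can be killed and replaced without the application noticing.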

Of course, all the good things we have always demanded from infrastructure (security, availability, reliability, supportability, etc.) must still apply. 

But, in the Cloud world, we deliver them in a different way - using micro-services hosted on anonymous farms of infrastructure. 

Under this new philosophical approach, the focus moves to Supplier Management. Choose your Cloud Supplier with great care - your business data is in their hands. 

Friday, March 16, 2012

Six Essentials for Effective IT Strategy


IT Strategy initiatives frequently fail to deliver tangible benefits; organisations remain unchanged, costs continue to rise, value-add is diminished, and morale falls. 
Despite this, it is possible to create a genuinely effective IT Strategy; one that will result in pragmatic, actionable recommendations to transform your organisation from “chaotic” to “managed”.
Developing an effective IT Strategy can be fraught with difficulties; it is not uncommon to meet organisations that have the results of a Strategic Review, embellished by charts, graphs and well-meaning policies, yet struggle to convert these good ideas into practical action. And without practical action, the company sees no tangible benefits.

The temptation is to abandon the Strategy entirely, and revert to tried-and-trusted behaviour. After a short interval, the organisation falls back into a “making it up as we go along” or “Just do it” methodology. Neither of these extremes is optimal for an IT organisation. Nor are they inevitable.
Our experience shows that there are practical ways to transition an IT organisation from “chaotic, ad-hoc, individual heroics” to “managed” or even “optimising” ( i ), by addressing six major areas.  Some of these are familiar to IT Service Management professionals, others are taken from the wider world of Management Consultancy. All of them are fundamental, focused, and clear to implement.

1. Control what you Measure
It is ironic that the IT industry, guardian of most things numeric, often neglects to collect and publish numbers about its own activity. Yet statistical data is an essential pre-requisite for modern management. 
As an IT Manager do you (and your business) know the numbers of servers you are managing, what they are used for, how much storage they use, how many support issues are being resolved etc.? 
More importantly, do you know how these figures are changing? Are you currently managing twice as many virtual machines as two years ago? With more or less staff? And what projects and initiatives are your team working on?
Capturing Key Performance Indicators or Metrics enables us to:
    • explain to the Business (our paymasters) what the IT Team is actually doing
    • justify existing and future IT expenditure
    • engage with Business sponsors to address issues with “problem applications”
    • improve internal planning and control resource allocation
Metrics help us ensure that we are “doing things right” in terms of allocation of effort, and to demonstrate to business stakeholders that we are “doing the right things”.
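By way of illustration only, the sketch below shows the sort of simple trend reporting these questions imply. All the figures and field names are invented; it is a sketch of the idea, not a reporting tool.

kpi_snapshots = {
    # year: a handful of invented Key Performance Indicators
    2010: {"vms": 150, "staff": 12, "tickets_resolved": 4200},
    2012: {"vms": 310, "staff": 11, "tickets_resolved": 5100},
}

def trend(metric: str) -> float:
    """Ratio of the latest value to the earliest value for one metric."""
    years = sorted(kpi_snapshots)
    return kpi_snapshots[years[-1]][metric] / kpi_snapshots[years[0]][metric]

if __name__ == "__main__":
    latest = kpi_snapshots[max(kpi_snapshots)]
    print(f"VMs per member of staff (latest): {latest['vms'] / latest['staff']:.1f}")
    print(f"Growth in VMs managed: {trend('vms'):.1f}x, "
          f"with {trend('staff'):.2f}x the staff")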
2. Deploy appropriate Software Tools
Most software vendors accept that the sheer complexity of Managing IT requires a variety of different management tools. So we need a clearly defined strategy for choosing, implementing and integrating them.

Consider the following typical concerns: do you have multiple software tools that do the same thing? Do you have different software products using different naming conventions for the same thing? Can you extract holistic reports across different software tools? 
It is essential that we focus on the two primary reasons for using Software Tools: (1) to capture metrics on the status of IT, its assets, costs and activity, and (2) to automate the actual IT support function itself. 
A Software Tools approach incorporates the following:
    • “golden references” that hold master data which is replicated to other tools.
    • clear ownership of tools and skills to run them.
    • maximum utilisation and maximum return on software investment.
Without this, we risk purchasing duplicated products, creating “shelfware”, or adding to the complexity of the environment we are trying to manage.
An integrated Software tools approach enables us to maximise the return on our investment in Management software, and to scale our capability to handle future growth of IT and the business.
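As a purely hypothetical sketch of the “golden reference” idea, the fragment below keeps one master record per server and pushes read-only copies out to the other tools so that naming stays consistent. The tool names and fields are invented.

# One tool owns the master record for each host (the "golden reference").
golden_reference = {
    "lon-web-01": {"owner": "Web Team", "environment": "Production"},
    "lon-db-01": {"owner": "DBA Team", "environment": "Production"},
}

# Downstream tools that must use the same names (monitoring, service desk...).
downstream_tools = {"monitoring": {}, "service_desk": {}}

def replicate(master: dict, tools: dict) -> None:
    """Push a copy of every master record to each downstream tool."""
    for catalogue in tools.values():
        for hostname, attributes in master.items():
            catalogue[hostname] = dict(attributes)  # a copy, never a shared reference

if __name__ == "__main__":
    replicate(golden_reference, downstream_tools)
    # Both tools now report the same host under the same name and attributes.
    print(downstream_tools["service_desk"]["lon-web-01"])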
3. Establish and Improve Processes
The majority of IT Managers are aware of the necessity of proper Processes, regulatory requirements such as “SOX”, and best practices such as ITIL and ISO-20000. Unfortunately, poorly designed processes can hinder, rather than help. Processes can become a bottleneck, rather than an enabler. Alternatively, a poorly implemented “Agile” approach to organisational behaviour can lead to quality issues, particularly in the long term.
Our experience in this area suggests IT Managers can address this dilemma by formulating a Strategy which focuses on:
    • Building on existing behaviours rather than enforcing textbook patterns,
    • Improvement techniques such as LEAN and Six-Sigma,
    • Processes that deliver measurable “outputs” to the business.
A pragmatic Process Improvement strategy enables us to introduce an integrated set of processes, resulting in (a) Reduction in costs and increased efficiency, (b) Predictability, better resource planning and estimating, and (c) Repeatability, Auditability, and Verifiability.
Such a process improvement strategy enables us to work “smarter”, and to deliver IT services faster, more responsively, and at a more predictable cost. 
4. Standardise Technology Choices
Many IT organisations exist without an Infrastructure “road map” for the technologies which they will operate in the future. This can result in:
    • Legacy applications with high maintenance costs
    • Heterogeneous data centre technologies
    • Scarcity of resources capable of supporting old infrastructure
    • A multiplicity of different support teams, and duplication of support effort
    • Spiralling costs of technology refresh
These issues can be addressed early, without having to resort to periodic “crisis management” technology refreshes.
The first phase of the Standardisation strategy is to define the “Production Readiness” criteria for your organisation. This means describing the degree of Scalability, Resilience, Security, Manageability and Supportability you require, as part of the Enterprise Architecture Framework.
Production Readiness enables IT Managers to “score” applications (whether purchased or in-house) for their suitability for deployment. 
Once a mechanism is in place to assess supportability, then an appropriate costing mechanism can be implemented, so that these costs are exposed when project decisions are taken.
A Production Standards strategy ensures that technologies are chosen on the basis of their long-term supportability. As a result, business IT decisions take into account the true cost of IT. Cost transparency leads to trust, which can lead to more focused IT investment.
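For illustration, the kind of “scoring” mechanism described above might look something like the sketch below. The criteria, weights, marks and pass threshold are all invented examples; a real framework would define its own.

# Invented Production Readiness criteria and their relative weights.
READINESS_CRITERIA = {
    "scalability": 3,
    "resilience": 3,
    "security": 4,
    "manageability": 2,
    "supportability": 2,
}

def readiness_score(assessment: dict) -> float:
    """Weighted score (0-100) from per-criterion marks in the range 0-5."""
    total_weight = sum(READINESS_CRITERIA.values())
    weighted = sum(
        weight * assessment.get(criterion, 0)
        for criterion, weight in READINESS_CRITERIA.items()
    )
    return 100 * weighted / (5 * total_weight)

if __name__ == "__main__":
    candidate_app = {
        "scalability": 4, "resilience": 3, "security": 5,
        "manageability": 2, "supportability": 3,
    }
    score = readiness_score(candidate_app)
    print(f"Readiness score: {score:.0f}/100 "
          f"({'deployable' if score >= 70 else 'needs work'})")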
5. Build People and Teams
Most IT managers accept that managing IT is as much about managing people as it is about the technology. Unfortunately, they are often given little opportunity to develop skills for managing and motivating people, other than through the hard knocks of doing the job – the “sink or swim” approach to team and people development.
The IT industry in general has a strong reputation for investing in technical training, but frequently does not invest in the “people skills” needed for managing teams.
Using a framework such as the Action-Centred Leadership model ( ii ), a Management Coaching and Personality Profiling initiative can help managers to improve their performance in the three core areas:
    • achieving the task
    • managing the team or group
    • managing individuals
This approach helps Managers recognise the benefits of really understanding their team, and to learn how to capitalise on the diversity of skills and personalities that they are responsible for. This, in turn, leads to significant productivity gains as well as reduction in staff “churn”. 

Leveraging the services of a Management Coach and Personality Profiling is therefore an essential part of an effective IT Strategy.
6. Energise the IT Culture
IT departments often see themselves as nothing more than a cost centre or a slave of the real business. In order to drive forward the potential benefits of IT, the IT department needs to see itself as a business in its own right, with its own strategy, values, skills, expertise and passion. 

This can be achieved by:
    • building a distinctive culture through team communications
    • uniting the team around a shared vision and strategy
    • acknowledging successes and challenges as a common experience.
By addressing these cultural changes, the IT organisation will rightly see itself as a “profit centre”, rather than a “cost centre”. 
As it values itself and behaves differently as a result, its customers (the business sponsors) will start to see it as a valued advisor, subject matter expert and enabler of business change.

Deliver Cost-effective IT
Experience tells us that it is possible to have an IT Strategy, which:
    • is practical and workable.
    • is easy to communicate and gives genuine improvements.
    • uses proven techniques from IT and Quality Management in a systematic way.
    • creatively combines solutions to build a better future.
For IT professionals, faced with 21st century challenges (Cloud Computing, Cyber Security, Green Imperatives, to name a few), an Effective IT Strategy is essential.
This original article was written by Dennis Adams, and appears in the TWENTY:12 Enhance Your IT Strategy Yearbook of the BCS - The Chartered Institute for IT.  Text reproduced by permission of the BCS and ATALink Ltd. 

i) The terms “chaotic, ad-hoc, individual heroics”, “managed” and “optimising” are taken from the Maturity Level descriptions in the Capability Maturity Model. CMM is registered to Carnegie Mellon University (CMU), and was developed at its Software Engineering Institute (SEI).
ii) The Action Centred Leadership model was developed by John Adair, who is also the author of over 40 books on management and leadership, including Effective Leadership, Not Bosses but Leaders, and Great Leaders.

Tuesday, January 17, 2012

The Cloud won’t solve your Management Issues


I like the idea of “cloud computing”; in our own consultancy business we use shared services for most of our key business processes. But I don’t think the cloud will solve some of the problems people had hoped for.

Of course, I may be just over-cautious.

Having worked in IT for a number of years, I have to fight the temptation to resist the “next big thing” which promises to solve all our IT problems. To make matters worse, I used to work in the sharp end of Software Sales, so I have seen over-zealous marketing glossing over the practical challenges of implementing the latest solution.

But this time my concern is more fundamental; it is based on our core motivation for embracing the Cloud concept in the first place.


Outsourcing compute processing, even core data, brings risks - security risks, performance and capacity risks to name a few. The justification for cloud computing is that the benefits outweigh the risks.

The attraction of the cloud approach is that someone else looks after hardware provisioning, capacity planning, availability management etc. etc., leaving you to get on with running your business processes.

So what if you need more capacity? Just pay more - Simple! Or is it?

The fact is that no technology provider has unlimited resources, and if you choose a supplier which is not the “right size” for you, you could end up with excess costs or (worse) a provider that cannot provision to your needs.

So you still need to think about Capacity Management.

The Cloud enables you to outsource technology. But there is no such thing as outsourcing responsibility.

For some, the attraction of moving to the cloud is that they no longer need to manage hardware and software. Instead, they need to manage the “wetware”; the people who supply the cloud solution.

You might get rid of Availability and Capacity Management.

But instead you have to replace them with Supplier Management.

And, as many people will tell you, managing people can be far more complex and hazardous than managing technology.

Wednesday, December 21, 2011

IT Production – Cinderella or Ugly Sister?

With the pantomime season fast upon us, many IT managers will be dreading the pager call in the middle of Act 2: “IT’s behind you!”, or rather “IT’s not working”. While the rest of us bask in the warm company of friends and family, other poor lost souls may well be struggling with a callout for a cold hardware failure, or a stubborn Oracle or SAP system refusing to produce the correct results. 

For others, it will be a case of “I can’t come to the ball (for which read: consume massive quantities of alcohol etc. etc.), I’m on call this evening.”

Like it or not, our information-industrial society cannot survive without 24 * 7 IT. And that means 24 * 7 IT Production Support.

Surprisingly, given that for many companies as much as 75% of their IT Budget is spent on Business As Usual (“BAU”) or Production issues, it seems there are too few answers to the question of how to best manage this vital part of the IT landscape.

One approach to addressing these challenges is “MOPS”.

As is common these days, MOPS is a 4-letter acronym. (At which point, it may be worth remarking that there has been a plethora of FLAs – or four-letter acronyms – over the last few years. This is surely another indication of the growth of IT exceeding its “name space”).

The “M” in MOPS stands for “Metrics” and highlights the importance of capturing meaningful data on the Assets and Activity of IT Production. This can include all sorts of data, from an Asset Register or small Configuration Management Database (CMDB) to track the growing responsibilities of Production, to Key Performance Indicators that show Service Levels, Incidents and Callouts. Not to be forgotten is the importance of capturing the activity of the Support Teams by means of Timesheets. All of these metrics, properly collected, managed and structured, can go a long way towards helping the IT Production manager with his biggest challenge – explaining and justifying the IT Production costs. 
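As a toy illustration of that kind of metric capture (the records and figures below are entirely invented), a minimal asset register can be rolled up into per-service numbers along these lines:

# A few invented asset records, as might sit in a small Asset Register / CMDB.
asset_register = [
    {"asset": "lon-web-01", "service": "Online Orders", "incidents_this_month": 2},
    {"asset": "lon-db-01", "service": "Online Orders", "incidents_this_month": 5},
    {"asset": "lon-fs-01", "service": "File Shares", "incidents_this_month": 1},
]

def incidents_by_service(register: list) -> dict:
    """Roll incident counts up to the business service each asset supports."""
    totals = {}
    for record in register:
        service = record["service"]
        totals[service] = totals.get(service, 0) + record["incidents_this_month"]
    return totals

if __name__ == "__main__":
    for service, count in incidents_by_service(asset_register).items():
        print(f"{service}: {count} incidents this month")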

Operational Tools comprises the second letter. Simply put, this part of the acronym highlights the importance of Software to help run Software and Hardware. The range of useful software extends from Backup utilities and Software Distribution to Monitoring, Management, Alerting and Capacity Planning. However, Software Tools in and of themselves cannot deliver the required levels of efficiency and automation in IT Production unless they are properly integrated. In fact, IT Management Software itself is relatively cheap. The real cost of deploying such tools lies in the implementation and integration within the IT Production organisation. Here, the MOPS approach discusses how to establish a Monitoring and Management Tools Strategy, and how to build a “Referential” approach to Software Tools integration. 

As we all know, effective management of IT is impossible without clear and appropriate Processes and Procedures –the “P” of the MOPS acronym. This is where the ITIL framework dovetails into the MOPS approach, since MOPS recommends using ITIL as the basis for defining processes. However, MOPS also identifies other processes such as how to manage Infrastructure Research & Development, how to build engagement processes with the Development projects, and how to build a Standards Governance process. MOPS also recommends using process improvement techniques such as Six-Sigma and LEAN to increase the efficiency of existing process workflows.

Often forgotten in IT Production, the final letter of the acronym (“S”) refers to the importance of IT Production Standards. Here, the MOPS approach discusses the importance of defining Infrastructure Technical Standards, and how to define an IT Production Strategy and Architecture role to act as “gatekeeper” to the production estate. How many times in IT Production have we seen applications deployed that are functionally rich from a business perspective, but lacking in supportability, without reference to IT Production imperatives such as resilience, scalability, backup and recovery capability, and the ability to be monitored in Production? MOPS seeks to address this by looking at the definition of “Production supportable”, and by setting a Standards process in place to be applied to proposed new application developments. 

So MOPS may be one approach that could enable us to get a better level of control over the frenetic world that is IT Production.

So will things change in future? I hope so.

For a long time, the IT Production team was the Cinderella of the story. She was kept away in the dark recesses of the organisation, with few people realising or acknowledging that she was a vital part of the story.

Armed with “MOPS”, she may be able to sweep out some of the cobwebs that have built up over the years, and even be able to take time off for a party or two. Who knows?

I wish you all well in the festive season, whether or not you are on callout.

Thursday, November 24, 2011

Where has IT Quality disappeared?

What happened to the concept of Quality?

In the old days of IT (or Computer Science, as I believe it was once called), the main practitioners came from an engineering or civil service background. The idea of procedures and processes was pre-eminent.
Such was the importance of computing power that detailed design, walkthroughs and analysis were carried out before people were let loose to cut code, or to implement a system.
This, in turn, led to the importance of processes and controls, and to the so-called Mainframe mentality. It was alleged that it took years for anything to be done on a mainframe, which was one of the justifications for the later Client-Server revolution (remember that?) and the rise of midrange systems (remember them?) which were said to be easier to deploy and implement.
Despite a period when the term Software Engineering was used, the cultural norms have moved towards the world of Agile Programming, Extreme Programming, Prototyping etc.
In the world of Infrastructure, there has been a rush to catch up as the Infrastructure desperately tries to keep pace with the rush of new Applications being thrown over the wall onto it.
In the meantime, we now have chaotic organisations, where management processes (control of change, scheduling of activities, etc.) are equally Agile (or disorganised, depending upon your point of view).
So where are we now?
Some organisations I have seen have Chaotic Programming, Chaotic Infrastructure Management and Chaotic Team and Processes.
Is this the best way to run Information Technology?
Sometimes I think not.
Yes, we must be agile and responsive to the business.
But if we cave in to the pressures for unstructured and ungoverned change we will be creating issues for those that follow us in the IT world.
For many, who live as contractors, this is not an issue; the next contract is around the corner, and someone else can pick up the pieces. Even CIOs are not immune to this short-termism.
Maybe it's time to take a longer term, more responsible view of the way we do things?

Wednesday, March 17, 2010

What Makes you Mad about the current state of the IT Industry?

We recently started a poll on LinkedIn to ask people what makes them mad about the current state of the IT industry. The results so far are insightful, but perhaps not surprising.
Many people cited short-termism and lack of a coherent strategy as key factors.
Others spoke about how Management (and in particular Management Processes) were preventing innovation and stopping IT from delivering what the Business wanted.
There were also a significant number of comments about the way in which employees are treated within our industry, citing lack of commitment to training etc.
When the poll is closed, I am hoping that we can publish the summary analysis on this blog. We will also share some thoughts at the upcoming Service Desk and IT Support Show which we are attending in April.
In the meantime, this is your opportunity to share a brief rant with your fellow IT professionals. It could be the constant pace of change, the lack of training, the "fire-fighting" culture, the lack of proper processes (or maybe too many?)....
Whatever particularly drives you mad about our industry, feel free to share it!
A famous entrepreneur once said that "every good idea begins with a rant".

Friday, February 12, 2010

Is "Big Bang" a good way to implement Infrastructure?

I was recently involved with a UK client who have just cancelled their Infrastructure deployment project, which was originally going to be implemented as a "Big Bang" deployment.
Part of the reason for this was the underlying risk of such a disruptive deployment.
To put this in context, when I was first brought in to review the project, I realised that the new technology was so disruptive that it would actually be far easier and quicker to have a clean switch-over, rather than trying to incrementally upgrade the infrastructure. My view was that the risk could also be managed.
Since then, the Client's Business has moved on, the risk analysis was reviewed, and they decided to move forward incrementally. This will mean only getting (say) 40% of the improvements in the same timescale. Nevertheless, because of the changed business circumstances this makes sense, and so I supported the change of Strategy.
As an IT Production Consultant, I am generally unhappy with big-bang deployments. I prefer the gradual incremental approach which is more risk-averse, and more in tune with the culture of IT Production generally.
It would be interesting to have comments from other consultants in this area.

Saturday, September 19, 2009

What's in a name? Do we call ourselves Infrastructure and Operations or IT Production?

"Infrastructure and Operations" appears to be a recognised market segment these days. It is a useful descriptive term, since it covers the main aspects of the IT Production role:

  • Infrastructure - looking after the equipment, hardware, software, networking and other technical stuff which modern IT needs to have in order to run day-to-day
  • Operations - the processes and behaviours required to look after the "stuff" (the Infrastructure).

However, I personally prefer the term "IT Production", for a number of reasons. In my view, it ...

  • Is simpler and easier to remember
  • Highlights a logical contrast between "Development" and "Production".
  • Implies a single organisational structure dedicated to a single purpose.
  • Defines a clearly recognisable marketplace for tools and services.
  • Recognises the importance of the "after go-live" part of IT, as a discipline in its own right.

The last point is the most important: Whilst IT Development enables a business to gain competitive advantage by using technology, it is the IT Production side which actually ensures that the competitive advantage is realised.

One of the areas we are speaking to Gartner about at the moment is the importance of terminology, and the use of IT Production as the recognisable term for what we work in.

Names do mean something. They confer expectations, and status. And IT Production needs to receive the status which it deserves.

Which, of course, means that we have to start delivering to a higher set of expectations.

Saturday, August 29, 2009

Understanding and Managing People who Manage IT

For some time, we have advocated using the four "MOPS" as a means of identifying how to improve the management of IT Production.

However, although these "MOPS" are necessary in order to improve the management of IT Production, they are not sufficient.

IT Production Management is also significantly a people skill. Technical Managers can often benefit from a scientific approach to understanding and managing people.

The use of personality profiling is not particularly common in the IT industry at the moment. However, it may be gaining ground. It can enable IT Managers to ask questions such as “How can I…

  • Improve the way I communicate with my team and peers?
  • Enhance the motivation of my teams?
  • Identify the strengths and weaknesses of a person/team and maximise their performance?
  • Best manage a specific person/team?

One such technique is the Birkman Report, an internet-based assessment system. It describes your unique style of leadership - your goals, your approach, what motivates you to lead, and what happens to you under stress.

Armed with this information, the IT Manager is able to develop and refine their leadership skills.

Legal Stuff

The MOPS ™ acronym is trademarked by Dennis Adams Associates Limited. The acronym stands for Metrics, Operational Tools, Processes and Standards, the four foundational requirements for IT Production Management.

Birkman Direct ® is a registered trademark of Birkman International, Inc. Copyright © 1989 – 2002, Birkman International, Inc, Houston, Texas. All rights reserved. Only Birkman-certified consultants, or persons working under the direct supervision of such consultants, are authorised to give you information relating to the Birkman Report.

Friday, July 24, 2009

How is the economy impacting IT Consulting?

How bad is it really?

We have all heard a lot of feedback over the last few months about how dire the IT consulting market is. I have to say that, at the moment, things don't appear to be as bad as some people are saying. Of course, there is always the effort involved in trying to get new prospects to part with their money, but that's part of business!

We certainly haven't been inundated with requests for work, but there does appear to be an appetite among some companies to bring in people. Who knows, maybe the worst is over?

Having studied economics (a long time ago!), I am aware that many of the economic indicators are subject to a "lag". In other words, we only know that we have come out of recession about 6 months after it actually happened. The same occurred when we entered recession, as you may recall. We kept getting economic reports saying that we were already facing a crisis, and that it had been going on for months.

Consequently, I think that a more accurate indicator of the state of the economy is typically the extent of the "feel good factor". Speaking to CTOs and others, I get the impression that they are feeling more positive, and have more budget to spend than previously.

Meanwhile, it's a case of chasing people to close the next business deal...

Thursday, April 23, 2009

Oracle Buys Sun. A Natural Progression, or Unnatural Mistake?

Can the Software Giant make sense of Hardware?

Java and Solaris are the prize.

The announcement that Oracle will be taking over Sun Microsystems has generated a huge amount of reaction in the blogs and within the IT industry generally. There have been questions about what Oracle's strategy is, what the future will be for Solaris on Sparc, where the free MySQL database lives, etc. etc. There have also been some questions in some minds about Larry Ellison's sanity. It has certainly been a bold move. Some claim to have seen it coming - I certainly did not.

My own interpretation is that Oracle are being opportunist. There was an ailing company - Sun Microsystems - whose heyday in the .COM boom was long gone. They had a huge commitment to R&D without much to show for it. They have some distinctive products (the Sparc chips), some OEM market (Storage from Hitachi), and some very interesting free, or nearly free, software and commitment to the Open Source world (Solaris 10, Star Office - whose code base is forked into Open Office - and MySQL). They also own the Java stack. Maybe they are worth a few billion, even if they don't currently make a profit.

The Profit Motive

And that is the key point. If Oracle is about anything, it is about a business that exploits their assets to make a profit. I suspect that there was no "grand strategy". Charles Wang, the former head of Computer Associates, once said that at the level he worked, people "make it up as we go along". Oracle is driven by a Profit Motive. That, and a hatred of Microsoft. Add to that the fact that IBM and Cisco are circling around Oracle's historical profit levels, and the deal makes sense.

More than just a database company

Oracle has been more than a database product company for many years. They started with the database, but over the years have positioned the company as an Application Platform. Oracle Financials was one attempt. Then add PeopleSoft, and myriad other acquisitions. So they have diversified away from the database. If you look at their latest figures, you see that the Oracle database itself is less than 50% of the revenue of the company as a whole. So this is an exercise in diversification. Move up and down the software stack to ensure that you can offer everything the customer could possibly want. All at a profit.

Hardware is not Software, Larry

Oracle has tried to move into the Hardware space before. They created a product called "Raw Iron", which was an embedded hardware product for running the Oracle database. Coincidentally (maybe?) this was based on Sun hardware. There is a very interesting FAQ released by Oracle yesterday which says "Oracle's ownership of two key Sun software assets, Java and Solaris, is expected to provide our customers with significant benefit." This suggests strongly that Oracle still see Sun as a software vendor. Whilst Oracle have lots of experience in integrating companies, those companies have always been other software companies. Running a Hardware company is a different thing. The sales model is different, the lead times are different. And you have to ship physical equipment all round the world. It will not be an easy integration.

Predictions

Everyone else is making predictions. Usually, these are based on what the author would do. However, these are my predictions on what Oracle themselves will do. Whether they are accurate predictions, I will leave history to determine. Whether they are good business, that will be the realm of Economics.
Product-by-product predictions:

  • Java: I doubt if Oracle want to upset the Java community. In fact, I suspect that Java will become more open. Oracle's view will be - why do the work ourselves when there are so many willing volunteers to do it for us? Oracle want Java so that they can ensure that all their applications have a good strong application server stack. But watch out for "Oracle Extensions" to the main product.
  • Solaris: Oracle claims that it can now optimize the Oracle database for some of the unique high-end features of Solaris. It has always had this option, but was afraid of "lock-in" to another vendor's product. I predict that there will be some new features of Solaris that Oracle can exploit. But there won't be much. They don't want to alienate the Linux users.
  • Sparc Chips: This is the nub of the question. Oracle have said that they will grow the business. Oracle salesmen may clinch deals by selling integrated hardware alongside the application. They will be able to point to Oracle-specific APIs in Solaris to show performance gains. However, if Fujitsu decide to come calling, I would not put it past them to hand over Sparc development, and OEM the solutions.
  • Sun Storage: If Oracle can sell storage at a marginal profit, they will do so. Particularly if it means software license sales.
  • Star Office: Maybe this is a product that can be stacked against Microsoft. But you have to sell a lot of Star Office licenses to equate to a single Oracle DBMS license. Is it worth it? I suspect that, as with Sun, Star Office will be a sideshow.
  • MySQL: Who cares? The fact is that MySQL generates relatively little profit. So R&D will be cut back. The product will still be there, but will rely on the OpenSource community to develop it. Oracle knows that MySQL is not much of a threat. It will be allowed to stand, or fall, on its own.
It will be worth watching this one...

Thursday, October 11, 2007

Windows 2008 Server Core - back to the Future (Command Line) ?

Where's my command line manual? Slick move or desperation?

The news that Windows 2008 Server will be available in a "cut-down" version appears to be good news from many different aspects, especially for people who will want to run IIS or SQL Servers. Firstly, the ability to de-install complex logic which is not required for the core work (such as the GUI) will reduce the "attack surface" - the number of possible entry points where hackers can gain exploits. The smaller the operating system, the less likely there will be vulnerabilities. Secondly, it must logically simplify the runtime behavior of the OS, and make it easier to maintain and manage. This can only be good news for Sys Admins involved with Windows Web and SQL Servers.

However, there are always concerns which arise. Since the only way to dialog with the Server will be via the new command-line shell, the question arises: "How long will it be before this shell exhibits vulnerabilities?" On the face of it, Microsoft have neatly side-stepped questions about vulnerabilities in Internet Explorer, Explorer, Media Player, etc. etc. by the simple expedient of removing them from the build entirely. If you want a server, just get rid of all the non-server pretty stuff.

At the same time, it has to be acknowledged that Microsoft do appear to have listened to their customers. Most organizations with Windows Server build their own cut-down deployment version, particularly for the "Edge" or DMZ where the web servers live. Microsoft have just reflected this preference, although in this case the amount of "fat" that can be cut out is far more than is currently possible with Windows 2003. In my view, good Sys Admins are always able to do most of their work from a command line. It sounds as if this will become essential in the future.

The circle is complete

Back in the old days of NT 3.5, you may remember that the GUI was de-coupled from the core Operating System. At the time, this was a clever approach by Dave and his designers. It fitted well with the fact that NT was inspired by the VMS 32-bit kernel, and enabled the GUI and core development to follow their own lines. I also heard that NT 3.5 systems could run successfully even if the GUI crashed - and I did see a partial example of this back in the early 90's. The problem with this approach was that the overhead of context switching in and out of the GUI every time a window had to be moved, or a box drawn, resulted in a slow Operating System on the desktop. And at the time, Microsoft wanted to maintain one Operating System for both the Desktop and the Server ranges.

With the introduction of the Windows 95-style shell, Microsoft did two things. They re-wrote Windows Server with a 95-style shell, and they made the shell itself a key part of the Server, thus destroying the de-coupling which had been done in the original version. The result was a relatively slim, stable and fast Desktop Operating system. It was also a reasonable Server, as anyone still using NT4 will testify. The problem was that the GUI itself and all its attendant add-ons (Internet Explorer, for one) resulted in a more bloated OS, which became inefficient and vulnerable to attack. And as the Server market began to ramp up, Microsoft began to question the benefits of keeping a single code base for the Server and Desktop. With the introduction of Windows XP (and later Vista) for the Desktop, it became clear that the code had to be forked. It was time to specialize.

So now we are back to the era of slimmed-down, command-line-only servers which only have the software for their own specific purposes. Any extraneous generic functionality is stripped off. Come to think of it, isn't that what SERVERS are really meant to be like?

Friday, November 03, 2006

Now you can buy SuSE Linux from Microsoft: am I dreaming?

The Microsoft - Novell saga continues
Maybe Microsoft will stop selling Operating Systems???

I don't know if you found it a shock announcement, but it certainly confused me. Released on 2nd November is the announcement that Novell and Microsoft have agreed a set of broad business and technical collaboration agreements that will help their customers realize unprecedented choice and flexibility through improved interoperability and manageability between Windows and Linux. There you have it. There is even a picture of Novell Inc. President and CEO Ronald W. Hovsepian and Microsoft CEO Steve Ballmer shaking hands, all smiles. The agreement basically means that you can ask your Microsoft salesman to quote for X copies of SuSE under a reseller agreement. What is happening?

Best of enemies / best of friends?

There is a saying that you should keep your friends close to you... and your enemies even closer. Well, Microsoft and Novell certainly come into that category. Don't forget that Novell could be credited with creating one of the first practical file / print servers, a workstation authentication mechanism, a small systems directory, a ... until these were trumped by Windows NT, and things like Active Directory. Even today, there are people who prefer Netware to AD and think that we should all be running token ring instead of NetBIOS (or whatever it is now called). Don't forget Word Perfect, one of the really good early word processors (under MS-DOS), until Word began to become the de-facto standard. I think the case has been made - Microsoft and Novell have a history of competition. So what does Microsoft do with competition? They either kill them off, or buy them out. So has Steve Ballmer gone soft?

Some of the small print...

On the technical side, the two companies will set up a facility where engineers will work on enabling co-location of Windows and Linux, using virtualisation technology. In addition, there will be common standards on web services management, interoperability between AD and the Novell Directory, and translators for the MS Office XML file format and the Linux OpenDocument format. But examine some of the detail, and you begin to see what is happening. Firstly, the decision to sell SuSE via Microsoft is just a concession. I doubt if MS salesmen will be given priority commission rates on Linux sales. And Mr. Ballmer himself has been quoted as saying "If you've got a new application that you want to instance, I'm going to tell you the right answer is Windows, Windows, Windows." Pretty conclusive. So it appears that Novell will get precious little sales from this agreement.

What they do get is time. They have effectively bought off the giant by feeding him some scraps. In return for a percentage of the revenue from every SuSE license they sell, Microsoft has dropped any intellectual property legal actions they may have against Linux users. So Microsoft gets some cash from SuSE sales (whether they contributed to the sale or not, they still get the money), and Novell gets some breathing space. I think Microsoft is the winner here. Novell needs to take advantage of this calm before the storm. They have until 2012 (when the agreement may run out) to build sufficient impetus to be able to stand on their own two feet. Of course, that assumes that the agreement goes full term. Some companies have been known to exit long-term agreements ahead of time...

Thursday, September 14, 2006

Does SOA spell the end of SAP?

Is this the end of the SAP Consultant's gravy train?
Want a job as a SAP Consultant?

SAP Consultants have always had an interesting life. The challenge centres around the fact that you cannot ever have a generic solution to a specific set of customer requirements. It is all very well persuading people in the Infrastructure world that a generic solution like Tivoli or Unicenter or Patrol will address their needs. Crudely speaking, Infrastructure is Infrastructure. But when you get into Enterprise Resource Planning you are touching key differentiators. Each company's balance sheet and profit and loss accounting is different. Touch that, and you must have a specific solution tailored to each customer.

So what does this have to do with SOA? Everything. In the past, this customization has been done by highly skilled (and highly paid) consultants who would crawl all over the organization and require significant amounts of business analysis before they could work their magic in customizing SAP. As a result, deployments of SAP could sometimes be measured in years and months, rather than the months and days which other applications may have required. I am not suggesting that there was anything wrong with this. It was simply a by-product of having to customize each deployment of SAP to address the specific business needs of the customer. But now things may begin to change.

SOA - the next generation

But re-writing SAP to conform to a Service Oriented Architecture (SOA) could potentially (and I use the word "potentially" deliberately) solve this challenge and reduce the time to deployment of SAP solutions. It might also reduce the complexity and deployment challenges, so that SAP could deploy upgrades and enhancements quicker. The SAP approach employs NetWeaver and Web Services, so that all SAP software will be SOA-enabled by 2007. This should be good news for Systems Integrators or Customers. In theory, with a common services interface, it should be easier to customize SAP, and/or integrate it with other Office or Enterprise software products. After all, there are many developers these days who know about Web Services. The recent announcement of SAP Discovery will also help smooth this transition. SAP could become part of a so-called "composite application" containing Business and Financial workflows. According to one Research Group, SAP's decision to use a model-based approach will make it easier to tailor the application to fit the business, not vice-versa. In short, SAP's pragmatic approach may reduce time to deployment and thereby increase the attractiveness of SAP to small and medium size businesses. To do this, SAP may well have to look at its pricing model.

SOA Footnote

On a general note, there is also some disillusionment arising about the use of SOA these days. Even David Chappell is quoted as saying that software re-use (one of the key justifications for introducing a SOA) has failed because of the cultural and business barriers. So, for many organizations like SAP, the key benefit of SOA-enabling their software may be to open it up to the customers. This may mean that the days of the specialized SAP customization consultant are numbered. On the other hand, I seem to remember them saying a similar thing when COBOL was introduced...

Sunday, April 02, 2006

Vista delays could open up Linux on the desktop

Will corporate users finally lose patience?
One more delay too much?

The news has broken (not unexpectedly) that the latest version of Windows - now to be called "Vista" - has been delayed, at least until the start of 2007. Will this be the delay that finally encourages people to move to Linux or other desktop technologies? If such a revolution was going to happen, now is probably one of the better times for it. After all, Linux is (almost) able to be deployed by a non-technical user, and it is backed up by OpenOffice, Thunderbird, FireFox, Samba and other compatible products that enable it to co-exist in the Microsoft world. Will people jump ship following yet another Microsoft delay?

Reasons for the revolution

It's not just the delays that users are upset about, it's the lack of functional benefits that they will get when / if they deploy Vista. Firstly, what is there in Vista for the average commercial user? The new File System, one of the new features which were touted some time ago, will not make it at all. Basically, it is just a new interface, which is a bit closer to the Apple "glass" interface, with the ability to have semi-transparent "3-D" appearances. Neat, but not exactly revolutionary. And not much of an incentive, if people are being encouraged to move away from Windows XP. Of course, there is a new programming interface, and the wide use of XML to configure applications and define the interface. Useful for programmers' CVs, but not exactly ground-breaking. In fact, when all is said and done, there is not a lot of stuff in Vista which the average user will start writing home about. And that is the problem. What will users get for their money? In short, why should they abandon XP? The only real reason is that, one day, Microsoft will stop supporting it, just as they no longer support Windows NT workstations. So, to coin a phrase from Star Wars, "Fear will keep the local systems in check".

Will it happen?

My view is that the take-up of Vista by Corporates (who, let's face it, are the people who pay Microsoft the big bucks) is likely to be slow. Corporate customers want to see a return on their investment, and a new glossy front-end does not do it for them. There is also the fear of security holes. Just as XP has been locked down by Service Packs and other patches, the last thing that corporates want is another string of security risks with a new architecture. In fact, Microsoft might get a better take-up if they promoted Vista as just a new shell to XP (just as Windows 2000 was promoted as "Built on NT Technology"). In short, I predict a longer future for XP. Corporates will wait and see. If there are security scares, or performance issues, then you could see an exit from Windows on the Desktop. Who knows?

Tuesday, February 28, 2006

Will Intel-inside-Apple become a corporate standard?

It will be software packages and interoperability that carry the corporation.

Apple Inside?

The decision by Steve Jobs to ditch Power chips for Intel might have made sense if he had taken it a year ago, but somehow I think he may have missed the boat. Intel, on the other hand, have found another outlet for their entry- and medium-level chips, and given a sharp jolt to the anti-Intel camp (which, from what I have seen, appears to be growing daily, with the rise of AMD). So why did Apple suddenly decide to change camps? It's puzzling in some respects. There was an argument that the Intel 32-bit architecture with multiple-core chips had a lot more power than Power. Certainly the new Intel-based MacBook Pro has been reported to have better performance. But was the Power so bad? Not really, since you will notice that the battery life has dropped off with the new chip. Swings and roundabouts with any architectural design.

Winners and Losers

Of course, there are downsides. One of these, which has barely been hinted at, is battery life. The Apple notebooks had a deserved reputation for long battery life. I know of one person who claims to regularly get 5 (five!) hours life from his G4 Powerbook. Not any more... The Intel chips require lots of juice. So battery life will, like as not, be down at the level of typical PC notebooks. Can't have everything, I'm afraid.

So what of the future?

Some analysts have said that Apple's move to Intel technology is the beginning of a process towards opening the Apple MACOS X operating system to other hardware. How about purchasing a Toshiba or Compaq, and having MACOS X loaded instead of Windows XP? Sounds very tempting. After all, I can run Microsoft Office on MACOS, can still use email, can be authenticated with an LDAP environment, and can share Folders using Samba. Sounds promising to me. But will it happen? I don't think so. And the reason is to do with Drivers.

One of the things that makes Windows XP so pervasive, but which also can lead to instability, is the fact that it works with pretty much any hardware technology you care to name: Dell, HP, IBM, Toshiba, Lenovo, Tiny, Sony, ... the list goes on. In order to do this, Microsoft have had to invest in (or persuade vendors to create) device drivers that will work with these technologies. But therein lies the problem. The more drivers, the more complexity, the more likelihood that they will not easily co-exist. What happens when a NIC from one manufacturer has an IRQ conflict with a 17-inch display driver from another manufacturer? Most times, nothing. But the introduction of signed device drivers in Windows 2000 Server was one indication of the extent of the potential problem, at least in Microsoft's mind.

Apple, on the other hand, does not have these sorts of problems. They have one set of hardware, and that is all the MAC operating system has to work with. Any problems? Apple make the hardware, the firmware and the software. Co-existence is easy. If Apple introduced new hardware support, they would fall foul of the device driver issues that have bedeviled Microsoft these many years (ever since Compaq cloned the IBM PC BIOS, in fact). I don't think Apple want to go there. Whatever its faults, an Apple is still a single-supplier solution. Incompatibility problems simply don't exist. Long may it continue.

Sunday, December 11, 2005

Network Attached Processing - the next big thing for Java ?

Will NAP have the same all-pervading presence that NAS gained?

It's not often that a new piece of technology comes along where people are tempted to say "why didn't they think of that before?" Yet I think I have found just such a technology. And if I am right, you will be hearing a lot more about it in 2006. The concept is called Network Attached Processing, and the company in question is Azul Systems.

When Java was first invented, it was decided that software code would not be compiled "natively" to any particular hardware or operating system. Instead, the compiled code (sometimes called "byte code") is designed to run in an environment called a "Java Virtual Machine" (or JVM). The JVM represents an imaginary machine architecture. In order to run Java applications on, say, Linux or Windows, it is necessary to have a Java Runtime environment that maps the JVM to the specific target operating system architecture. Once the Java Runtime environment has been written, then all Java programs will be able to run on that target. This architectural approach lies at the heart of the "write once, run anywhere" mantra for Java. Obviously, if the underlying OS and architecture is relatively similar to the JVM, then the Java Runtime will be relatively easy to write, and should be very efficient. Ideally, then, an executing JVM should be located on an Operating System and Architecture that is architecturally similar to the JVM. On the other hand, there is a requirement to be able to run Java applications on industry-standard Operating Systems (e.g. to run Websphere or Weblogic or JBOSS on Solaris or Linux).

Azul Systems have provided a solution to this challenge: Network Attached Processing, in the form of an Azul Compute Appliance. The Azul solution consists of a customised JVM or "VM Proxy". This VM Proxy receives incoming byte code for execution and forwards it directly along a network path to the Azul Compute Appliance, which executes it. On the Compute Appliance, the byte code is received by a "VM Engine", which then performs the Java compute operation, before returning control to the calling server's VM Proxy. One obvious advantage of this architecture is that the "client" Operating System still appears to be performing the same functionality as before, running the business Application. However, the actual CPU processing is being off-loaded to the Azul machine. Neat, or what?

So what's the Azul Compute Appliance? In a nutshell, it is a 24-way (or up to 384-way!) multi-processing server which is specifically designed to run JVM environments very, very efficiently. For example, it handles multi-threading very well (not surprising, if you could have 384 CPUs!), offers oodles of "heap" memory, highly efficient memory garbage collection etc. etc.

So how about the best of both worlds? Keep your existing application on your old server, hook in a Gigabit Ethernet card, and hang the Azul System off the other end. Better still, have multiple Servers being Compute Served by a single Azul machine. Sounds a bit like a Compute equivalent of NAS? Yep - you've got it!

Once the concept is grasped, all sorts of opportunities arise. Firstly, we are used to purchasing servers to host J2EE environments based on their computing power. Instead, the host server becomes just a "mount point", a suitable O/S architecture for running the I/O and communications activity. The real processing is done in the "compute farm". What happens if you need additional CPU for this growing application? No action required - just ensure that the compute farm is powerful enough! The use of a "Compute Farm" suddenly changes the whole dynamics of Servers in the datacentre. Each Java server could just be a tiny blade (or a Virtualised server), providing it has the O/S and I/O capability for the application. Datacentre management of Servers would be massively simplified with NAP, just as it has been in the storage arena with NAS.

Azul Systems' web site is at http://www.azulsystems.com. I hear that they have plans to support the .NET runtime as well in the near future. Definitely one to watch in 2006.
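
As a toy analogy of the offload pattern only - it has nothing to do with Azul's actual protocol or products, and every name below is invented - a Python sketch of "keep the I/O on the host, ship the heavy computation elsewhere" could look like this, with a local process pool standing in for the remote compute farm:

from concurrent.futures import ProcessPoolExecutor

def heavy_computation(n: int) -> int:
    """Stand-in for the CPU-hungry work that would run on the compute farm."""
    return sum(i * i for i in range(n))

def serve_request(farm: ProcessPoolExecutor, n: int) -> int:
    """The 'proxy': forward the work, wait for the result, hand it back."""
    future = farm.submit(heavy_computation, n)
    return future.result()

if __name__ == "__main__":
    # The pool of worker processes plays the role of the compute appliance;
    # the calling process remains a lightweight "mount point" doing the I/O.
    with ProcessPoolExecutor(max_workers=4) as farm:
        print(serve_request(farm, 1_000_000))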

Tuesday, November 29, 2005

News Review: Ingres Open Source Buyout from CA

Yet another twist in the Ingres saga, as CA float it off independently. But have CA forgotten what they use Ingres for?

It seems only yesterday that I was commenting on the decision by CA to Open Source the Ingres relational database (see Ingres Open Source = Graveyard or Second Life), in July 2004. At the time, I considered that there was significant business logic in the decision. A genuine heavyweight Open Source DBMS would be able to compete with the likes of MySQL, with the ability to handle massive volumes of data. On the business side, revenue would come from commercial support, even though the software itself could be free. Now the world has changed again with the Management buyout of Ingres Corp from CA.

The new Ingres Corp

I suspect that the writing was on the wall for Ingres as a result of the reorganisation last April, when Ingres became part of the "others" division. Now, however, CA have sold the product and many of the staff to a Silicon Valley private equity firm, which itself has only been going for a year. However, one of the most impressive things about this transaction is the management team, which reads like a "who's who" of database heavyweights:

  • Dave Dargo (CTO): 15 years at Oracle, including responsibility for the Oracle on (Open Source) Linux programme.
  • Emma McGrattan (Engineering): 11 years running the CA Ingres engineering team.
  • Andy Allbritten (Support Services): ex-Oracle VP for Support and Services.
  • Dev Mukherjee (Marketing): ex-Microsoft Servers General Manager for Marketing.

Ingres Corp is now a company of 100 employees (most of whom are ex-CA, having moved across to the new organisation), with plans to grow it significantly. In addition, the OpenRoad development tools (which are tightly coupled with Ingres itself) and other products like the Enterprise gateway are also included. Even the Follow-the-Sun support teams are moving. However, OpenRoad will not be Open Sourced yet (try saying that in a hurry!). We are promised that aggressive marketing will start soon. Coincidentally (?) the announcement was made on the same day that Microsoft released the latest version of SQL Server. That is called timing!

Reactions

Looking at the Ingres newsgroups, there is a mixed reaction. One DBA complained that his company was just in the middle of negotiating a move from the earlier closed-source versions of Ingres to the new Open Source Ingres R3, just as the announcement was made. Others responded positively, since many had complained in the past that CA had not really marketed Ingres, or put R&D into it. With the new company there will have to be a strong focus. And existing customers should still be supported, often by the same people as before.

CA's decision

Whilst this appears, in my opinion, to be very good news for Ingres, I am slightly puzzled by why CA have agreed to it. Don't they realise that a huge percentage of their software is deeply reliant on the Ingres database? Unicenter R11 is due to be launched, which uses Ingres as its only relational database. Won't the marketing people at CA feel a bit exposed at seeing their products dependent upon a non-CA database?

Challenges

In the past, one of the significant aspects of the Ingres sales proposition was that it was being actively used and promoted within CA itself. Therefore, anyone who bought Ingres would be purchasing a product with a significant locked-in installed base, which would guarantee longevity. Now, things have changed. The "used by CA" tag no longer has the same credibility. Ingres Corp is out on its own in the wide world, and will have to fight on its own against the likes of MySQL. The competition will be fierce. Oracle have just recently announced a free (yes - free!) version of Oracle Express for Linux. SQL Server has just seen a major enhancement. And MySQL is beginning to move into the big league, with an improved optimizer, views and triggers. One big question for all "Ingressers" (if indeed that is the correct collective name) is whether the Ingres development programmers and support team will remain with the new company in the medium term.

One thing is clear to me: Ingres Corp is unlikely to have the same marketing muscle as Computer Associates. On the other hand, it should have a more focused approach to selling the product. I wish them well.

Saturday, October 29, 2005

News Review: Peregrine finds a home inside HP

The Prodigal Returns

News that HP have agreed to purchase Peregrine must put smiles on the faces of the existing 3,500 ServiceCenter customers. At last they can feel that the software house has a valid home, where it will hopefully get the investment and marketing effort it deserves.

Peregrine has had an interesting journey getting here. It was quite an acquisition-maker itself in the early part of the decade, including an interesting time "dating" Remedy (now part of BMC). Then it ended up filing under Chapter 11, and seemed to write itself out of the history books. The thing that appears to have saved it is that it has a reasonable product, at a time when every company is trying to adopt the ITIL framework by producing software offerings with the ITSM (or Service Management) strapline. HP's offering in the form of OpenView very much complements this, so I foresee a strong future for both products.

The key factor for ITSM offerings is having a common configuration management database (CMDB). This database should be able to tie together all the assets of the company (servers, workstations, software licenses and installed applications), and cross-match them to the HelpDesk so that incidents can be logged against them. This in turn means that Problem Management can drill down into root causes by looking at the incident history, and that Changes and Releases can then be implemented against those same assets (see the sketch at the end of this entry). So a common CMDB is vital, both for a good ITSM offering and for a successful deployment of ITIL processes. Peregrine has the potential to be such a product, particularly if it is well integrated with OpenView in future releases.

One curious question: why didn't IBM purchase Peregrine? After all, IBM acts as the channel for a lot of the Peregrine products, and surely IBM would benefit from Peregrine's CMDB. These days, I don't hear much about IBM Tivoli. It used to be the market leader in management of mid-range systems and applications. Now we have BMC, Computer Associates and HP. Are IBM unconcerned about the ITSM market? Or maybe they are just biding their time.
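To make the CMDB idea a little more concrete, here is a minimal relational sketch in Python/SQLite. It is emphatically not Peregrine's or OpenView's actual schema; every table name, column and sample record below is an assumption invented for illustration. It simply shows how assets, incidents, problems and changes can hang off the same configuration items, so that Problem Management can drill into incident history.

# Minimal illustrative CMDB sketch -- invented schema, not any vendor's product.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE configuration_item (      -- servers, workstations, licenses, installed apps
    ci_id   INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    ci_type TEXT NOT NULL
);
CREATE TABLE incident (                -- HelpDesk tickets logged against a configuration item
    incident_id INTEGER PRIMARY KEY,
    ci_id       INTEGER REFERENCES configuration_item(ci_id),
    summary     TEXT,
    opened_at   TEXT
);
CREATE TABLE problem (                 -- root-cause records that group related incidents
    problem_id INTEGER PRIMARY KEY,
    summary    TEXT
);
CREATE TABLE problem_incident (        -- which incidents a given problem explains
    problem_id  INTEGER REFERENCES problem(problem_id),
    incident_id INTEGER REFERENCES incident(incident_id)
);
CREATE TABLE change_request (          -- changes/releases implemented against a configuration item
    change_id INTEGER PRIMARY KEY,
    ci_id     INTEGER REFERENCES configuration_item(ci_id),
    summary   TEXT
);
""")

# Sample data: one asset with two incidents logged against it.
conn.execute("INSERT INTO configuration_item VALUES (1, 'mail-server-01', 'Server')")
conn.execute("INSERT INTO incident VALUES (100, 1, 'Mailbox unavailable', '2005-10-01')")
conn.execute("INSERT INTO incident VALUES (101, 1, 'Slow mail delivery', '2005-10-12')")

# Problem Management "drilling down": every incident recorded against the same
# configuration item as incident 100.
rows = conn.execute("""
    SELECT i.incident_id, i.summary
    FROM incident i
    WHERE i.ci_id = (SELECT ci_id FROM incident WHERE incident_id = 100)
""").fetchall()
print(rows)

The point is only that every record hangs off a configuration item; that shared key is what lets incident, problem and change data be cross-matched in the way described above.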

Thursday, September 29, 2005

News Review: Oracle + PeopleSoft + Siebel

If you can't make it, buy it.

The recent announcement that Oracle will be buying Siebel had been fairly widely predicted in some areas of the press. Oracle are paying $5.8 billion for 4,000 customers. I guess this is relatively small money compared with the $10 billion they paid for PeopleSoft. However, it leaves Oracle with a massive workload to integrate, and get value from, all their many products and offerings.

The Support Challenge

If you look through Oracle's acquisitions over the last few years, there is a huge amount of CRM software in their portfolio:

* PeopleSoft
* Vantive (part of PeopleSoft)
* JD Edwards
* plus Oracle's own offering
* and now Siebel

How on earth can Oracle support that many different code lines for just one functional requirement? Some clues about how Oracle might wish to do this appear in a recent article in Oracle Scene, the UK Oracle User Group journal, about Project Fusion. Fusion is the name for Oracle's new Service-Oriented Architecture (SOA), a way of building software applications that promotes connectivity between them. Oracle will need it! The article explained how this approach would bring together the best of Oracle, PeopleSoft and JD Edwards. Perhaps SOA-enabling all these tools will work, but there is still a lot of (redundant?) code to support. (A rough sketch of the SOA idea follows at the end of this entry.)

Motivation

So why did Oracle buy Siebel? Basically, there are a number of reasons for buying a rival manufacturer in the industry:

* Gain technology, to update or improve your own offering.
* Take out a competitor, to enable you to charge monopoly prices.
* Defensive action, to consolidate against another, larger rival.

Somehow, I don't think Oracle have bought Siebel in order to gain some valuable piece of technology. Instead, I think the second and third reasons are more likely. In the case of the third reason, the big rival is, of course, SAP. Oracle may have a large percentage of the market share of CRM / ERP software, but their offering is fragmented, unfocused, and expensive to support. There is a battle between Oracle and SAP, and any general will tell you that the organisation able to mass all its forces against a single point of attack will win. Oracle is in desperate need of a good coordinating strategy.
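As promised above, here is a minimal sketch (in Python, purely for illustration) of what "SOA-enabling" several CRM code lines might look like: a single shared service contract, with each acquired product wrapped behind its own adapter. None of these class or method names are Oracle's Fusion APIs; they are assumptions made up for the example.

# Illustrative only -- invented names, not Oracle's Fusion interfaces.
from abc import ABC, abstractmethod


class CustomerService(ABC):
    """The shared contract that every CRM back end would expose."""

    @abstractmethod
    def get_customer(self, customer_id: str) -> dict:
        ...


class SiebelAdapter(CustomerService):
    """Hypothetical wrapper around the Siebel code line."""

    def get_customer(self, customer_id: str) -> dict:
        # In reality this would call Siebel's own API or database.
        return {"id": customer_id, "source": "siebel"}


class PeopleSoftAdapter(CustomerService):
    """Hypothetical wrapper around the PeopleSoft/Vantive code line."""

    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "peoplesoft"}


def front_office_lookup(service: CustomerService, customer_id: str) -> dict:
    # Callers depend only on the contract, not on which product sits behind it.
    return service.get_customer(customer_id)


if __name__ == "__main__":
    for backend in (SiebelAdapter(), PeopleSoftAdapter()):
        print(front_office_lookup(backend, "ACME-001"))

The attraction is that front-office callers depend only on the common contract; the cost, as noted above, is that each adapter still has a complete, largely redundant code line behind it that somebody has to support.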