Discussions

News: Corporate Data Centers, a soon to be extinct species?

  1. A new article by Billy Newport gives an original prediction for the future of application and data center hosting. The article argues that the 'IT-less' corporation will emerge, and shows how J2EE, Linux, and mainframes or large partitionable Unix servers will play a key role in it.

    Read Corporate Data Centers, a soon to be extinct species? on TheServerSide.com.

    Threaded Messages (25)

  2. The only bad thing about this article is the extensive use of acronyms without explaining them. Unfortunately, this is common writing practice.

    Christian Rauh
  3. I apologize; I'll go through the article and post a 'Glossary' on this thread, or maybe Floyd could attach the glossary to the end of the document.

    Thanks
    Billy
  4. Billy,

    I know that bandwidth looks like it won't be a problem, and one could even read your article as a vision that will boost Cisco stock, but have you considered the speed-of-light problem when you move all the data centers out of every enterprise?

    What will the user interfaces be?
    Do we have a platform for the UI?
    You know that many IT managers will be reluctant to give up MS Outlook in favor of a Yahoo- or Hotmail-style interface.

    "A cost effective wan can be built using a thin client solution like Citrix."
    Really?
    I have seen that in production.
    It doesn't perform very well, trust me.


    Cheers,
    Costin
  5. Costin,
    The speed of light would matter if there were only one data center, but data centers are, and will continue to be, distributed close to most major business centers for exactly that reason.

    It's not just Cisco; it's companies like EMC, Compaq (StorageWorks and servers), IBM (storage and servers), Sun (servers and a limited storage story) and anyone in the fiber switch business (Brocade etc).

    Nothing changes for GUIs. I'm not talking about adopting a web front end, the same applications will be used.

    We used Citrix for both Unix and Windows and it works fine even on low-bandwidth connections for most Windows apps, especially Office/Notes/Outlook.
  6. > We used Citrix for both Unix and Windows and it works fine
    > even on low-bandwidth connections for most Windows apps,
    > especially Office/Notes/Outlook.
    Just curious, how many Citrix clients did you connect to the Citrix server?

    In my company we want to use it to simplify our desktop configuration, but we have some questions about scalability (we are talking about >10,000 client workstations in over 200 locations). For browser-based apps (meaning apps which were designed and built as browser-based apps), I think the centralization scenario will work very well and will simplify the desktop configuration. But our end users also want to use applications like Office. These kinds of applications were not designed to run massively under a Terminal Server/Citrix architecture, and I foresee a huge, huge, huge Terminal Server farm in the datacenter when deploying Office under TS/Citrix.

    At the current state of the technology, using TS/Citrix, do you think it will be possible to massively deploy Office-like applications when "Nothing changes for GUIs" and "the same applications will be used"?
  7. Well,
    You should really talk to Citrix about sizing concerns; it obviously depends on a lot of factors. I'm sure with a deployment such as you describe, they will be only too glad to assist, especially given the current environment.

    But for applications like Office, it appeared to be a CPU scaling problem rather than a memory one; Win32 applications can share a lot of the required memory amongst each other.

    Where I have seen big problems is with Java applications using Citrix. They use a lot of CPU, they use a LOT of memory, and this memory is not shared between processes, so memory requirements can be a problem.

    But I think you may be surprised by how many Office/Outlook users a single NT/2K server with Citrix can support.
  8. SOAP with EJB

    Hi
    I am trying to make EJB method invocations through SOAP, but the problem is that methods with non-String parameters fail with a deserialization error.
    I read your article on SOAP and EJB and thought you might answer my question; a rough sketch of what I'm doing is at the end of this post.
    Thanks in advance.

    Regards
    Amit
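
    P.S. In case it helps, here is roughly what I am doing - a minimal client sketch assuming Apache SOAP 2.x, where the bean class, URNs and method name are just placeholders for my real ones. My understanding is that non-String bean parameters need an explicit type mapping registered in a SOAPMappingRegistry (plus the matching mapping in the service's deployment descriptor), and this may be the part I'm getting wrong:

      import java.net.URL;
      import java.util.Vector;
      import org.apache.soap.Constants;
      import org.apache.soap.Fault;
      import org.apache.soap.encoding.SOAPMappingRegistry;
      import org.apache.soap.encoding.soapenc.BeanSerializer;
      import org.apache.soap.rpc.Call;
      import org.apache.soap.rpc.Parameter;
      import org.apache.soap.rpc.Response;
      import org.apache.soap.util.xml.QName;

      public class OrderClient {
          public static void main(String[] args) throws Exception {
              // Non-String parameters need an explicit (de)serializer;
              // without this mapping the call fails with a
              // deserialization error.
              SOAPMappingRegistry smr = new SOAPMappingRegistry();
              BeanSerializer beanSer = new BeanSerializer();
              smr.mapTypes(Constants.NS_URI_SOAP_ENC,
                           new QName("urn:order-service", "order"), // placeholder
                           OrderBean.class, beanSer, beanSer);

              Call call = new Call();
              call.setSOAPMappingRegistry(smr);
              call.setTargetObjectURI("urn:order-service");  // placeholder URN
              call.setMethodName("placeOrder");              // placeholder method
              call.setEncodingStyleURI(Constants.NS_URI_SOAP_ENC);

              OrderBean order = new OrderBean();
              order.setId("42");
              Vector params = new Vector();
              params.addElement(new Parameter("order", OrderBean.class, order, null));
              call.setParams(params);

              Response resp = call.invoke(
                  new URL("http://localhost:8080/soap/servlet/rpcrouter"), "");
              if (resp.generatedFault()) {
                  Fault f = resp.getFault();
                  System.err.println(f.getFaultCode() + ": " + f.getFaultString());
              }
          }

          // Placeholder bean; the same type mapping must also be declared
          // on the server side for deserialization to succeed.
          public static class OrderBean {
              private String id;
              public String getId() { return id; }
              public void setId(String id) { this.id = id; }
          }
      }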
  9. Good article - I definitely agree with the vision. I do disagree, though, that J2EE even matters here. To me, web services is a much more important standard in this sense. If I have an application and deploy it as a web service, then I really don't care what the platform is...

    Still, web services will not be the answer to everything.

    Thanks!

    Damian
  10. Unfortunately, this isn't quite the reality.

    J2EE is important as it lets you write server applications/services in a platform-neutral fashion. Hence, you can deploy on the lowest-cost hardware platform that meets your SLA.

    Web Services, as currently defined, is just like an advanced message broker. It lets you make XML RPCs, and you can locate the servers hosting an RPC using a directory, UDDI.

    It does not provide a platform on which you can develop such applications; it merely provides the glue between them. This means that J2EE and .Net are probably the two main platforms on which Web Services will be built.

    Billy
  11. Of course it's not a reality - neither is J2EE on a broad scale, .NET at all, and, broadly speaking, the outsourced world that you've outlined.

    I'm not sure where you get the statement "Hence, you can deploy on the lower cost hardware platform that meets your SLA." J2EE is very expensive to deploy and will remain that way for some time.

    Given this, I'm not sure what you see as the benefit of J2EE in this model. To me, the most important thing is the guaranteed high speed line (after that, it's support of the application). That's the only way the thing works. From there, I need to be able to access my applications. These applications will be written in a wide variety of languages and run on many platforms. My Siebel system runs on NT. Is there a benefit to outsourcing it? Yes. Does it matter that it isn't J2EE? No.

    Therefore, while the idea of being platform neutral is compelling from a programming standpoint, it may not matter much when it comes to the business side. And, if J2EE servers remain at $10k a CPU, then the benefit of running platform neutral is significantly reduced.

    Web services will be important. J2EE and .NET will compete at the platform level, but when you can create applications that abstract to the same level, the language and platform you write the service in become less important. Why do I care if the service I use is written on .NET or J2EE? I won't, if the application fulfills the SLA and the functionality.

    My point is this: you're right, the outsourced data center is coming. High speed lines will make the biggest impact on when this happens. J2EE may help with this, but web services will also help. How? By abstracting J2EE to the service level, we get distributed computing across platforms - not just components or objects. This will lower costs by not requiring the monolithic application to reside even in one datacenter.

    Imagine this world: Siebel licenses you software - in the form of services. You construct an application that lives in your datacenter, but most of the application works via services. Your datacenter monitors the application to confirm Siebel's SLA. Siebel's application code lives at a different datacenter somewhere else.

    Is this a dream? Yes....but I think it's a pretty compelling one...
  12. Damian,
    We're in complete agreement. I indicated in the article that companies would just request that an application be made available at a given SLA. Siebel on NT is a great example, as would be any of the ERP/CRM applications (SAP etc.), web sites, B2C sites, email.

    So long as the outsourcing vendor can provide the application at a given level of service, the platform is irrelevant.

    The points about J2EE or Linux are simply that companies will probably still develop custom applications. Those applications need to be portable in order to get the lowest-cost platform. J2EE and Linux are just two ways of achieving this portability, that's all. The Linux API may be more important given the amount of off-the-shelf software available that is not written in Java.

    Your idea of connecting these applications up using Web Services puts Web Services in the role I see for them: an integration layer. You can view this as a platform if you want. The next article will try to detail what such an integration platform needs in order to do this; I call this piece of middleware a "Service Broker".

    Thanks
    Billy
  13. Billy,

    After I took the time to read it fully, I think it is a nice vision.
    In the end we'll all arrive wherever technical progress leads us.

    But clearly, at present we're quite a distance away from such a thing, and you cannot, for example, reasonably predict when the technologies will be in place.
    The current technologies, I'm afraid, can't support your vision; they are only a few pieces of the puzzle.

    In the meantime, technological progress may lead us someplace else.

    I guess that after what happened last year, technical progress will be much more business driven.

    For example, Internet usage has dropped and margins for data communications have dropped; who do you think will take the risk of investing in high-capacity networks?

    Remember, your vision was more or less shared by Oracle, IBM and Sun with the Network Computer, JavaStation and JavaOS.
    Quite a few billions went down the drain.

    And one of the big players - you know who - has a huge interest in this vision not happening.

    So things are not very simple ...
  14. Costin,
    I never said we could do it now, but we're pretty close at the moment.

    As for bandwidth, I saw a report recently saying that we're 70% overbuilt at the moment; that has to be good for pricing. The investment already took place last year. You can now do most of this approach in large cities in the USA. There are companies who will run fiber from their SAN farm to your building (I'm trying to find the url!) and of course, companies like IBM or CSC would be only too happy to provide most of these benefits to companies.

    Smaller companies are also doing this. Look at companies like NetLedger or other ASPs providing off-the-shelf web applications for consulting businesses or email/PIM services. SourceForge is another example. They are much cheaper in both startup and ongoing costs, and also deliver higher quality than doing it yourself.

    If IT becomes business driven then this scenario is even more likely to play out. Outsourcing is a pretty normal scenario when belts are tight and costs need to be controlled.

    For Web Services to really take off, I think the implementation and integration costs need to come down. That means these systems should work off the shelf with existing applications, which probably means that core applications need to be commoditized so that this integration cost can be reduced. Funnily enough, the companies that use ASPs will probably be the initial swarm of companies able to implement web services cost-effectively. They will probably be smaller/medium-size companies, as they are more likely to settle for off-the-shelf software rather than build their own; and by using off-the-shelf software or an ASP, the suite is probably easier to integrate and web-service enable.

    This article is a slightly different spin than the network computer. It didn't talk much about light, thin applications etc; it mostly talked about server or application hosting. It also mentioned desktop issues, and Citrix-type solutions are the most cost-effective way to support 'normal' users. Developers, or people who change their environment and need to install software regularly, would still keep their own PCs.

    Billy
  15. Glossary Added

    As promised, a glossary is now at the end of the article, and we'll try to make this standard practice from now on.
  16. Example of such a deal.

    Exodus have just announced a deal in which they will host Covisint, an auto B2B marketplace. The press release mentions many of the reasons why they chose to outsource.
  17. One of the interesting assumptions in this thread is the mention of ASPs and the associated SLAs. As we all know, ASPs will easily put together an SLA to support various application servers, from Broadvision to WebSphere. The reality check comes when things go wrong.
    The amount of application-specific configuration that needs to be done for these systems (such as EJB pools etc) means that most ASPs just do not have the expertise to manage what is, for most Internet-based orgs, their life-blood.
    While I would agree with what Billy says, I think there are some very fundamental, non-technical but technically associated reasons why corporate data centres are not going away very soon. I have yet to see an ASP engage at the level of SLA that can lead to total outsourcing. In fact, the amount of training that the application developers may have to do with the ASP may just make it cost-ineffective for both parties...
  18. Tony,
    As for the level of admin/tuning/know-how needed for application servers etc, companies like LoudCloud look like a good acquisition target for a large data center company, no? 350M? If their technology lets you police/monitor a system to better meet SLAs, then it may be worth buying.

    Most large companies probably have separate dev/production staff in any case, so developers have to train somebody or build the application so that it meets the production infrastructure requirements (deployment/administration/tracing etc).

    I'm hoping that JMX will mature to the point where it makes this aspect easier for developers. I hope the J2EE vendors will do more with JMX for developers than what BEA has done so far, for example: yes, their console uses JMX, but when will it be documented for developers to use, and when will they start making JMX containers that support Tivoli etc.? A sketch of the kind of developer hook I have in mind is at the end of this post.

    Most of the pain of outsourcing can also be felt when you make a separate dept responsible for production; it's a similar deal and can sometimes be just as contentious as outsourcing. Defining these requirements and getting the developers and the middleware products to meet them is the hardest part. It can be very frustrating for developers who are used to controlling the show when they move to such an environment.

    But, you're right, it's a big shift in thinking and organisation.
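
    To make that concrete, here is the sort of developer hook I mean - a minimal sketch against the standard javax.management API, where the pool class, its attributes and the ObjectName are made-up examples, not anything BEA or Tivoli actually ships. The point is that production (or outsourcing) staff can monitor and tune a resource at runtime without going back to the developers:

      import javax.management.MBeanServer;
      import javax.management.MBeanServerFactory;
      import javax.management.ObjectName;

      // EjbPoolMBean.java - the management interface a JMX console
      // (or a Tivoli-style adapter) would discover and display.
      public interface EjbPoolMBean {
          int getMaxBeansInPool();
          void setMaxBeansInPool(int max); // ops can tune this at runtime
          int getActiveBeans();            // read-only monitoring attribute
      }

      // EjbPool.java - the managed resource itself (made-up example).
      public class EjbPool implements EjbPoolMBean {
          private int maxBeansInPool = 50;
          private int activeBeans;

          public int getMaxBeansInPool() { return maxBeansInPool; }
          public void setMaxBeansInPool(int max) { maxBeansInPool = max; }
          public int getActiveBeans() { return activeBeans; }

          public static void main(String[] args) throws Exception {
              MBeanServer server = MBeanServerFactory.createMBeanServer();
              // Once registered, any JMX-aware console can read and
              // tune the pool without touching application code.
              server.registerMBean(new EjbPool(),
                  new ObjectName("myapp:type=EjbPool,name=OrderEntry"));
          }
      }

    If app server containers exposed themselves like this, and documented it, the handoff to a production or outsourced ops team would be a lot less painful.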

  19. Hi Billy,

    I agree with the vision in general but with several business and technology reservations.

    1. Organizational issues. Users like to feel that they can point to one single person who is responsible for "keeping the network up". If that person (usually an SA) screws up frequently, they're usually replaced. With an outsourced infrastructure, far-away events can cause havoc with mission-critical systems. A systems administrator in California can accidentally trip a wire and bring down bond traders in New York. Accountability is lost in this case.


    2. Technology issues: Even if you gave root access to every SA from every client whose applications ran on a particular machine, you would have issues around audit trails and reboots. Even if two applications are from the same company, you would have issues surrounding who had final control over the box. You could run things chrooted and in separate environments, but it would not stop the finger-pointing. Now, it may be that cost considerations finally move some groups to consider shared resources.

    3. Security: This is not a major issue (on the WAN side only) in my mind, since there are already protocol encryption mechanisms that people use. WANs are also already outsourced for most companies, and have been for a while (AT&T and MCI do a roaring business in this world).
  20. Thanks for the points Amit.

    1. Trust is the biggest problem. There is a big shortage of skilled people who can run such systems and know what they are doing. Current providers can handle simple configurations, but it's when you ask for more advanced topologies that you separate the men from the boys/women from the girls. This needs to be addressed by the center providers.

    2. That's where partitionable Unix systems come in. You can give each user his/her own virtual Unix system on the box, and all they can do is reboot their partition; the other partitions are unaffected by their actions. Mainframes are still the best way to do this sort of virtual machine. Unix boxes aren't fine-grained enough yet.

    3. The WAN is a readily solved problem with VPNs etc, but security on storage and servers is another thing. The level of paranoia of security people should not be underestimated, and they are probably right to be so. I've seen suggestions of running the DMZ and intranet portions of a web app in two partitions on a Sun E10K shot down by security people because someone could 'hack' through Solaris across partitions on a single box, so they used separate smaller boxes. Maybe the storage should be encrypted also, so that it's only accessible from the partitions that need access.

    Thanks for the comments.
    Billy
  21. In the mid nineties I was consulting for a large retail bank in the UK. 3,000 branches and 30,000 workstations.

    All banking applications were n-tiered, with the central services provided to us from a mainframe and all client applications written for Windoze. This was a time when distributed transactions and cheap boxes were the fashionable thing.

    The mainframe, however, provided us with exactly what you say: centralised support, centralised maintenance, backup, two datacentres to provide for failover, and if we needed more databases etc, we only needed to ask. We didn't need distributed transactions etc.

    Applications such as MS Office were still installed and maintained separately through machine build configurations. Considering this, I can't see why this cannot happen today.

    The only uncertainty is about the large applications such as Office, but otherwise this should be happening today.

  22. Nitin,
    I agree, we seem to be going in circles. First we had mainframes, then a rebellion towards lots of Unix servers running databases with fat PC clients. Then we went n-tiered, then we went thin client, and now we're consolidating those Unix servers back into larger centralized Unix servers with very thin clients running on network computers (PCs running Citrix/browsers or dedicated network computer boxes).

    People paying for IT who have been around must be scratching their heads thinking 'Why did I just spend X billion bucks to get back to what I had originally????'.

    People who stuck it out with mainframes must be smiling quietly...
  23. I have a few comments about Billy's document and the discussion that followed in this thread:

    * It should be noted that Billy's reference to IT seems to be limited to the operation of the data centres. In a more general view, IT's primary role is the appropriate use of technology to assist the business. Managing the servers and network is just a subset of that role.

    * When considering this more general view of IT, it becomes very clear that we are not moving towards 'IT-less', just towards a different approach to applying technology. Some might even argue that the role of IT has never been more important to a company's survival.

    * In the scenario(s) described by Billy, the IT infrastructure becomes a commodity to the CTO, where more than one vendor can compete for the same service. This allows him/her to concentrate on the true value-add of the new IT structure, which is to define, select and configure the application suite (custom or off-the-shelf) that best serves the business requirements.

    * From this perspective, we have come a long way from the early mainframe-centric IT of the 70's and 80's. Application systems are now 'encouraged' to conform to industry-standard protocols and architectures that promote and ensure a level of interoperability that was unfeasible only a few years ago.

    * Although it is still possible to design monolithic solutions to be hosted on almost any platform (Win32, Unix or MVS), experience has taught us that companies need to enhance their systems' flexibility while reducing exposure to a single part of their IT domain. The dissemination of service-based architectures and the standardisation of high-level service protocols (WSDL, UDDI, ebXML, etc...) are providing us with a new bag of tricks for designing smaller, well-defined business service components.

    * J2EE is a fundamental key to this evolution of application systems, and as more companies and software vendors see the benefits of re-thinking the way they design systems, we will reach that critical mass of re-usable services that we have been waiting for.

    * IMHO, it is at this point that all the investment made in J2EE will realise its full potential. At the same time, anyone still bound to large, costly and complex applications, whether on mainframe or Unix, will be left with a severe handicap.
  24. Right on the money.
    I didn't mean that companies would no longer need IT; I meant that IT would become more of a commodity and be outsourced.

    Billy
  25. Billy,

    As to the capability of Citrix, it is not Citrix that's to blame but the architecture of Windows: if a user changes the page size of a 500-page Word doc with images, the whole server will stall for a moment.

    Windows/Citrix is really not scalable for the moment; maybe in the near future.

    And the service level agreement: that's nice in theory, but it can't be enforced with current technology.

    You just can't have a minimum quality-of-service guarantee as yet.

    Yes, a start had to be made and it has been made; some ASPs went bankrupt, others are doing OK.
    But I think we still have a way to go before corporate data centers move outside.

    I worked for a company where I had to get my email from an Exchange folder on the other coast, through VPN over fiber.

    I sure didn't like it.
    So my point is that if you're in the ASP business, you have to be really careful what you promise your customers.

    Hopefully, things will move on within the next few years.

  26. I totally agree with your points. In terms of integration players in the web services field, Bowstreet, in my opinion, is the clear leader. Obviously, MS and Sun will both come after them - but basically they have made a system for dynamically requesting and delivering services, both on an internal website and as an external service integrated into a client's website.

    One final note from the WSJ this morning: according to them, 97% of fiber is unused. That means prices will come down, and this vision will become more and more of a reality.