System Interconnect Improvements: Don’t Forget the Network

Editor’s Note: We continue our Smarter Computing Breakthroughs series this week with a post on System Interconnect Improvements from Inder Gopal, Vice President for Systems Networking Development in IBM Systems and Technology Group. The Breakthroughs Series will introduce you to key technological developments IBM has advanced to strengthen our integrated portfolio of systems, software and services – technologies that are often the unsung drivers behind the IT infrastructure that enables a smarter planet. You can find links to previous Breakthroughs posts at the bottom of this post.


How can you make the most of network resources?

Data center optimization often focuses on applications and systems. But if you really want to optimize service delivery, it’s important to keep in mind how critical network resources are as well.

Consider virtualization, for instance. Virtualization gives organizations extraordinary power to do things they've never done before, such as automatically migrating entire virtual servers from host A to host B to gain more performance when they need it.

But is the network context always taken into account in this scenario? If host B doesn't have the bandwidth the migrated workload needs, the move's potential to improve service responsiveness, and satisfy customers or end users, will be compromised by an unnecessary networking bottleneck.
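To make that concrete, here is a minimal sketch (in Python, not any particular product's API) of a placement check that treats destination network headroom as a first-class constraint before a migration is allowed. The host names, capacities and demand figure are hypothetical.

```python
# Illustrative only: pick migration targets with enough network headroom.
hosts = {
    # host: NIC capacity and current utilization, in Gbps (made-up numbers)
    "host-a": {"nic_capacity_gbps": 10, "nic_used_gbps": 9.2},
    "host-b": {"nic_capacity_gbps": 10, "nic_used_gbps": 3.1},
}

def can_host(host: str, vm_demand_gbps: float, headroom: float = 0.1) -> bool:
    """Return True if the host can absorb the VM's network demand
    while keeping a safety margin of spare capacity."""
    h = hosts[host]
    free = h["nic_capacity_gbps"] - h["nic_used_gbps"]
    return free - vm_demand_gbps >= headroom * h["nic_capacity_gbps"]

vm_demand = 2.5  # Gbps the migrating VM is expected to push
targets = [h for h in hosts if can_host(h, vm_demand)]
print("viable migration targets:", targets)  # ['host-b']
```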

Recent advancements in network technology can, of course, help. Really getting best value from them, though, will often require that organizations adopt a new approach.

Convergence comes to mind. Historically, many organizations have used fundamentally different technologies to link different classes of data center assets: for example, Fibre Channel fabric for storage and Ethernet for servers. But this dual approach means more total network assets to manage, higher costs and less flexibility. And because the whole infrastructure is more complex, it's also slower to adapt for any given purpose, like creating and deploying a new service that will hopefully satisfy customers in new ways and lead to new business growth.

Moving to a converged solution can help address these issues. This is because one network is fundamentally simpler, easier and less expensive to manage and optimize over time than two. If you can accomplish everything you need to accomplish on one, you probably should (and indeed this has been a major factor driving the business success of Voice over IP for the last decade).

Converge your way to a more flexible, scalable data center

That said, making data center network convergence a practical reality, and realizing all of that theoretical potential, means taking many factors into account.

One would be the organization’s level of investment in its current networking infrastructure. If an organization is completely new to the world, like a startup, it has no investment in network infrastructure. It can simply purchase and deploy the latest and greatest. But most organizations have a deep investment they’re not at all eager to abandon. They’ll want to hear something other than “Let’s rip out everything you have and replace it with something else” — instead, they’re going to want to replace as little as possible, while creating as much new value as they can.

The details of convergence implementation are another. Are we just talking about managing network bandwidth over both Ethernet and Fibre Channel? Are we talking about unified physical switches and cabling? Are we talking about virtual switches that migrate across an infrastructure along with virtual servers and preserve their business policies as they go? How about Fibre Channel over Ethernet? The specific form of convergence (and how it’s likely to change in the near term) has to be taken into account.

And while we're on the subject of change, there's the fact that organizations will typically want to future-proof their networks as much as they can. Maybe today a given service doesn't need X level of bandwidth, but will it tomorrow? That's very hard to predict (and will be until somebody comes out with a practical crystal ball). So it's important that new network investment delivers real flexibility and scalability, essentially transforming network bandwidth into a fluid resource, like memory inside a server, that can be allocated whenever and wherever it's needed, in as close to real time as possible.
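As an illustration of that "fluid resource" idea, here is a minimal sketch, in Python rather than any vendor's API, of a shared bandwidth pool that services draw from and return to on demand, much like memory inside a server. The class, service names and figures are invented for the example.

```python
# Illustrative only: bandwidth treated as a shared, allocatable pool.
class BandwidthPool:
    def __init__(self, total_gbps: float):
        self.total = total_gbps
        self.allocations = {}  # service name -> Gbps currently held

    def allocate(self, service: str, gbps: float) -> bool:
        used = sum(self.allocations.values())
        if used + gbps > self.total:
            return False  # not enough headroom left in the pool
        self.allocations[service] = self.allocations.get(service, 0) + gbps
        return True

    def release(self, service: str, gbps: float) -> None:
        held = self.allocations.get(service, 0)
        self.allocations[service] = max(0, held - gbps)

pool = BandwidthPool(total_gbps=40)
pool.allocate("web-tier", 10)
pool.allocate("storage-replication", 20)
print(pool.allocate("batch-analytics", 15))  # False: pool exhausted
pool.release("storage-replication", 10)
print(pool.allocate("batch-analytics", 15))  # True once capacity is freed
```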

Actually, even real-time reaction may not be good enough in many cases. Instead of reacting to changes in demand for different services, and assigning bandwidth correspondingly, ideally organizations would be able to act proactively. That means anticipating performance needs relating to network/fabric elements, based on changing workloads, and taking action to support those needs before a performance shortfall can occur.
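Here is a hedged sketch of what "acting proactively" might look like in code: project near-term demand from recent samples and grow the allocation before the shortfall occurs. The simple linear-trend forecast and the 20 percent margin are illustrative assumptions, not a description of any IBM product.

```python
# Illustrative only: pre-provision bandwidth ahead of a predicted shortfall.
def forecast_next(samples: list) -> float:
    """Project the next demand sample by extrapolating the recent trend."""
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    trend = samples[-1] - samples[-2]
    return samples[-1] + trend

recent_gbps = [4.0, 4.6, 5.3, 6.1]   # observed demand for one service
provisioned_gbps = 6.0               # bandwidth currently assigned to it

predicted = forecast_next(recent_gbps)
if predicted > provisioned_gbps:
    # Act before the shortfall: grow the allocation with some margin.
    provisioned_gbps = predicted * 1.2
    print(f"pre-provisioning {provisioned_gbps:.1f} Gbps ahead of demand")
```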

And, of course, if you're really going to treat bandwidth as a pool of resources, it's important that the pool be big enough to support future requirements. At a practical level, this goal can be compromised by switch limitations. A network switch intended for a given blade server chassis, for example, needs to supply more bandwidth than that chassis typically requires today, so the pool has room to grow. Switch vendors that offer exceptional total bandwidth therefore have a major competitive strength.

Management complexity is also a major problem to take into account — and that’s becoming clearer and clearer as data center architectures get more sophisticated. An organization that wants to move to a private cloud, for instance, is going to need a simple, straightforward way to manage that cloud’s network resources through business policies. Otherwise, the cloud is going to require a lot of manual oversight and adjustment, which takes away a lot of the appeal the cloud had in the first place.
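For illustration only, the sketch below shows the general shape of policy-driven management: business policies expressed declaratively and resolved automatically when a workload joins the cloud, rather than configured switch by switch. The policy names, fields and workload are hypothetical and do not correspond to any real product interface.

```python
# Illustrative only: declarative business policies applied to new workloads.
policies = {
    "production":  {"min_gbps": 4, "max_gbps": 10, "priority": "high"},
    "development": {"min_gbps": 1, "max_gbps": 4,  "priority": "low"},
}

def apply_policy(workload: str, policy_name: str) -> dict:
    """Resolve a named business policy into the settings pushed to the fabric."""
    policy = policies[policy_name]
    return {"workload": workload, **policy}

# A new workload joining the cloud picks up its policy automatically.
print(apply_policy("order-processing", "production"))
```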

IBM leads the industry in interconnect performance and convergence capabilities

IBM’s portfolio of network/fabric interconnect solutions has been developed with all these factors in mind — and many others besides. Our technology, which is built on open standards for maximum interoperability, minimizes rip-and-replace no matter what your current infrastructure or choice of network vendors may be.

Our virtual fabric solutions are also ideal for complex virtualized architectures, including private clouds, because they support virtual ports and virtual bandwidth management — essentially, transforming bandwidth into the fluid, poolable resource it needs to be for best results.

And, of course, we can help you achieve Ethernet/Fibre Channel convergence no matter how you define it or implement it — all the way up to a single unified network for all data center resources, if that’s your goal. This way you can really minimize network costs and complexity, while maximizing performance, flexibility, scalability and service availability.

Visit www.ibm.com/networking to learn more. Follow us on Twitter @IBMSysNet.

Previous Smarter Computing Breakthroughs posts:

  1. Middleware Optimized Systems
  2. Information Integration, Pt. 1
  3. Information Integration, Pt. 2
  4. Unified Management
  5. Data Security
  6. Image Management
  7. Cross-Platform Virtualization
  8. SMP Interconnect Fabric — Simpler, Faster, Smarter Scalability
  9. Intelligent Threads: Tuning the POWER 7 Processor to your Workloads
