Recent Updates

  • GTSI Editor 6:08 pm on February 25, 2014
    Tags: Shared Services, Solutions

    Public Sector Shared Services and Mobility, the Way Ahead 

    Shrinking federal IT budgets over the past few years have pushed the federal government to issue new guidance and directives that outline steps to increase the overall efficiency of information technology. In the quest to “do more with less”, we now have a host of policies centered on topics such as cloud computing adoption, data center consolidation, program transparency, and shared services.

    Although cloud computing is predicted to see its highest level of government adoption yet this fiscal year, one shouldn’t ignore the continued call for shared services.

    Shared Services First

    Shared services were designed to eliminate waste and duplication of IT assets. This agenda first surfaced in 2012 from the Federal CIO in the form of the Federal IT Shared Services Strategy. In reality, cloud computing and shared services are very closely related. Creating a cloud environment, be it public, hybrid, or private, that offers automated, self-provisioned, and repeatable services gives you the foundation to provide shared services across an organization.

    One area in particular that stands to benefit from a shared services strategy is mobility. (Remember BYOD? It’s all we talked about right before cloud.) The government places mobility under the Commodity IT category. The directive goes on to state that anything in the Commodity category is an immediate opportunity to build momentum in a shared services strategy. (See Figure 1.)

    Figure 1. Shared Services Model

    So, with this on the table for more than two years, why haven’t we seen more shared-service enterprise mobility infrastructures cropping up? And I’m not talking about pilot programs. Two contributing factors that come to mind are mobile security and organizational transformation. Both of these areas demand detailed attention.

    Enter DISA’s DoD-Wide Mobile Strategy.

    Moving into 2014, one large-scale program in action that is worth watching, as it may set a precedent for future large-scale secure mobile architectures, is the Defense Information Systems Agency’s (DISA) recent contract to develop and implement a DoD-wide Mobile Device Management (MDM) capability and Mobile Application Store (MAS). Slated for completion in mid-2014 (if all goes well), the program will have the capability to support over 600,000 mobile devices. A DISA press release states that “The establishment of the MDM and MAS is the next major step forward in DOD’s process for building a multi-vendor environment, supporting a diverse selection of devices and operating systems.”

    The Need for Mobility Solutions

    To make a project of this size successful, DISA needs to provide the same level of security on mobile devices that currently exists on the desktop. Beyond that, an end-user experience must be created that is familiar to the user and easy to use. Combining both of these effectively is not an easy task.

    Multi Factor Authentication

    For starters, a good multi-factor authentication solution must be established. This provides authentication by combining what you know (password, PIN) with what you have (CAC, PIV card). A third element that can be added is a biometric factor, such as a fingerprint reader or retina scanner. This type of security is critical and is backed by policies such as Homeland Security Presidential Directive 12 (HSPD-12) and OMB Memorandum M-11-11.
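
    To make the factor logic concrete, here is a minimal sketch of a multi-factor gate. It is purely illustrative; the AuthAttempt fields and the require_biometric flag are hypothetical names, not part of any DISA or vendor API. The point is simply that access is granted only when every required factor verifies independently.

        # Hypothetical multi-factor authentication gate (illustrative only).
        from dataclasses import dataclass

        @dataclass
        class AuthAttempt:
            pin_ok: bool          # something you know (password/PIN)
            smartcard_ok: bool    # something you have (validated CAC/PIV certificate)
            biometric_ok: bool    # optional third factor (fingerprint, retina)

        def is_authenticated(attempt: AuthAttempt, require_biometric: bool = False) -> bool:
            """Grant access only when every required factor checks out on its own."""
            factors = [attempt.pin_ok, attempt.smartcard_ok]
            if require_biometric:
                factors.append(attempt.biometric_ok)
            return all(factors)

        print(is_authenticated(AuthAttempt(True, True, False)))                          # True
        print(is_authenticated(AuthAttempt(True, True, False), require_biometric=True))  # False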

    Mobile Device Management

    A good MDM solution gives you visibility over any device connecting to your enterprise network, content, and resources. You need the ability to rapidly enroll devices in your enterprise environment and to update device settings over the network to enforce security policies and compliance. For example, the DISA solution will have the ability to remove applications from an end device remotely.
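
    As a rough sketch of what that check-in enforcement looks like, the snippet below compares a device’s reported state against an enterprise policy. The policy values, device fields, and app names are invented for illustration; they are not DISA’s actual configuration or any vendor’s API.

        # Hypothetical MDM compliance check at device enrollment or check-in.
        ENTERPRISE_POLICY = {
            "min_os_version": (7, 0),                  # assumed baseline, not an official value
            "encryption_required": True,
            "blacklisted_apps": {"example-untrusted-app"},
        }

        def compliance_actions(device: dict) -> list:
            """Return the enforcement actions to push to a device that just checked in."""
            actions = []
            if tuple(device["os_version"]) < ENTERPRISE_POLICY["min_os_version"]:
                actions.append("push settings/OS update over the air")
            if ENTERPRISE_POLICY["encryption_required"] and not device["encrypted"]:
                actions.append("enforce device encryption or quarantine")
            for app in set(device["installed_apps"]) & ENTERPRISE_POLICY["blacklisted_apps"]:
                actions.append("remove application remotely: " + app)
            return actions

        device = {"os_version": (6, 4), "encrypted": True,
                  "installed_apps": ["mail", "example-untrusted-app"]}
        print(compliance_actions(device))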

    End State, a Perfect World

    Eventually, as technology and security continue to progress, I envision a shared mobile environment that can 1) ensure mandatory security levels, 2) present the user with a self-service catalog, 3) offer an app store where developers can share and contribute code/APIs, and 4) be location transparent (think hybrid cloud).

    The new Federal Shared Services Implementation Guide identifies goals for federal IT teams as they move ahead, including improving return on investment and boosting productivity through the use of innovative services and integrated governance processes. The CIO Council advises managers charged with implementing shared services on the importance of getting executive buy-in.

    OMB is encouraging agencies not just to look at shared service providers for human resources or financial management, but also to look at consolidating systems or contracts internally. For example, using shared services, the Department of Commerce reduced the number of contracts used to buy computers and is now paying 35% less for desktop computers while saving more than $200M in administrative costs. Several states and localities are also benefiting from government-to-government shared services, as described in the article here.

     

     
  • GTSI Editor 1:29 pm on February 10, 2014
    Tags: Enterprise, Networks, Switches

    Enterprise Network Switches – the Hidden Bottleneck 

    A hidden bottleneck is growing within systems deployed on data center floors today. It stems from current and ongoing changes to IT system designs, driven by factors such as virtualization, modifications to storage networking, and the needs of Big Data systems. The requirement is to increase core network port capabilities to take advantage of the 40 and 100 Gigabit Ethernet (GbE) interfaces that are now routinely available on most enterprise-class network switches.

    John Burdette Gage, in 1984 while an employee of Sun Microsystems, is credited with coining the phrase “the network is the computer.” Now, 30 years later, the primary focus for many IT initiatives should still be “it’s all about the network.”

    Bandwidth requirements for virtualized servers are one reason why serious consideration has to be given to upgrading network core infrastructure.

    Chart: US Consumer Bandwidth

    As an example, you can find the reference architecture for movement of virtual machines in a virtualized environment here.

    A virtualized infrastructure needs plenty of network bandwidth. With 5, 10, or 20 servers on average being virtualized into one physical server, and each virtual server requiring at a minimum 1 GbE of bandwidth, the typical physical server needs multiple physical 10 GbE interfaces.
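
    A back-of-the-envelope sizing illustrates the point. The consolidation ratio comes from the range above; the per-VM figure and headroom factor are illustrative assumptions, not measurements.

        # Rough sizing sketch: aggregate NIC bandwidth needed by one virtualization host.
        vms_per_host = 20          # consolidation ratio (the high end of the 5-20 range above)
        gbps_per_vm = 1.0          # minimum bandwidth assumed per virtual server
        headroom = 1.5             # assumed allowance for migrations, storage traffic, bursts

        required_gbps = vms_per_host * gbps_per_vm * headroom
        nics_10gbe = int(-(-required_gbps // 10))          # ceiling division
        print(f"~{required_gbps:.0f} Gbps needed -> {nics_10gbe} x 10 GbE interfaces")
        # ~30 Gbps needed -> 3 x 10 GbE interfaces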

    Capabilities such as live migration, triggered by load balancing or other requirements, can add an additional network burden to a physical server, especially since these moves will normally occur during periods of peak activity.

    As an example, consider a typical VM that is 4 gigabytes in size and needs to be migrated from one physical host to another. Over a 1 GbE link, the migration could take 20 minutes or more, and because of factors such as bandwidth limiting, CPU utilization, and de-caching of RAM, it can have an adverse effect on the other VMs on the physical host, increasing their response times by a factor of 2 or more. Using a 10 GbE interface shortens that window of performance degradation for all other VMs on the same physical host, roughly in proportion to the tenfold increase in link speed. In addition, having additional available bandwidth greatly reduces the risk of a VM host failure or reset due to network port saturation.
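
    A simple transfer-time estimate shows how strongly the link speed dominates. The 4 GB figure comes from the example above; the efficiency factor is an assumption standing in for dirty-page re-copies, protocol overhead, and competing traffic.

        # Rough estimate of live-migration transfer time at different link speeds.
        vm_size_gb = 4.0            # VM memory/state to move (from the example above)
        efficiency = 0.25           # assumed usable fraction of the link

        def migration_minutes(link_gbps: float) -> float:
            bits_to_move = vm_size_gb * 8 * 1e9
            usable_bps = link_gbps * 1e9 * efficiency
            return bits_to_move / usable_bps / 60

        for link in (1, 10):
            print(f"{link:>2} GbE link: ~{migration_minutes(link):.1f} minutes")
        # Heavy contention on a shared 1 GbE link stretches this window further,
        # toward the 20+ minutes cited above; 10 GbE cuts it by roughly 10x.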

    With the increasing proliferation of 10 GbE interfaces on servers and other devices, blocking issues are emerging within the bandwidth of the data center environment. The ratio of 10 GbE interfaces on core network switches to 10 GbE interfaces on other devices within the data center rapidly falls below one to one, which results in increased network contention, degraded response times due to bandwidth unavailability, and an increased likelihood of network failures due to timeouts.
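
    A hypothetical worked example of that ratio follows; every port count in it is invented purely for illustration.

        # Oversubscription sketch: edge-facing 10 GbE demand vs. core switch capacity.
        server_10gbe_ports = 200        # 10 GbE ports on servers/storage at the edge (assumed)
        core_10gbe_ports = 48           # 10 GbE ports available on the core switch (assumed)

        oversubscription = server_10gbe_ports / core_10gbe_ports
        print(f"Oversubscription ratio: {oversubscription:.1f} : 1")    # ~4.2 : 1

        # Moving the core to 40 GbE ports pulls the ratio back toward one to one.
        core_40gbe_ports = 24
        effective_10gbe_equiv = core_40gbe_ports * 4
        print(f"With 40 GbE uplinks: {server_10gbe_ports / effective_10gbe_equiv:.1f} : 1")  # ~2.1 : 1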

    Changes to Storage Area Networks are also causing bandwidth issues within core networks.

    The following chart, though a little dated, shows the growth of iSCSI and NAS-based storage solutions when compared to Fibre Channel SAN:

    Chart: Storage installed base by interface

    Storage networks with sufficient bandwidth to avoid degrading response times are also becoming an issue as the conversion from Fibre Channel to iSCSI Storage Area Networks (SANs) occurs. The primary motivating factor driving this conversion is cost. On a port-for-port basis, when you compare overall bandwidth (for example, comparing the cost of an 8 Gbit Fibre Channel port to a 10 GbE port), Fibre Channel ports can be two to three times as expensive on a purely bandwidth basis.
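
    Expressed as cost per gigabit, the comparison looks like the sketch below. The port prices are hypothetical placeholders, not quoted list prices; only the shape of the ratio is the point.

        # Cost-per-gigabit sketch comparing Fibre Channel and Ethernet ports.
        fc_port = {"price_usd": 1500, "gbps": 8}     # 8 Gbit Fibre Channel port (assumed price)
        eth_port = {"price_usd": 600, "gbps": 10}    # 10 GbE port (assumed price)

        def cost_per_gbps(port: dict) -> float:
            return port["price_usd"] / port["gbps"]

        ratio = cost_per_gbps(fc_port) / cost_per_gbps(eth_port)
        print(f"FC:  ${cost_per_gbps(fc_port):.0f} per Gbps")
        print(f"Eth: ${cost_per_gbps(eth_port):.0f} per Gbps")
        print(f"Fibre Channel is ~{ratio:.1f}x more expensive per Gbps with these assumptions")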

    There are both pro’s and con’s to utilizing iSCSI as the primary transport mechanism for SAN’s. One of the biggest impacts to an existing environment is that without the expansion of existing core network adding the bandwidth required for a SAN architect can oversubscribe existing networks. In a majority of cases, this oversubscription will likely result in a direct to the overall network’s response times.

    As our customer base has moved from a standard business-day (5/9) operational/production environment to a 7/24 one, anything that increases network response time is detrimental to growth and user satisfaction.

    Use of large-scale analytical engines, such as Big Data systems, is overburdening core networks

    The recent growth in the number and types of analytical systems, such as Big Data, has created competition for bandwidth and, as a result, has become for most networks a contributing cause of lower network performance and slower application response times.

    A typical Big Data system, over the start-to-finish cycle of turning raw data into viable information, will transfer large quantities of data multiple times between storage systems and servers. Data is initially collected from raw sources, transformed and loaded into a database or database-like format, and then accessed from the database using specified queries. The outputs of these queries are then formatted into some type of user-accessible output (such as a report) that can then be acted upon. This process of moving, transforming, querying, and reporting all takes bandwidth, in many cases three or four times or more the bandwidth needed by average database systems today.
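
    A rough tally of those stages shows how the multiplier adds up. The dataset size and per-stage traffic factors below are assumptions chosen only to illustrate the shape of the calculation.

        # Sketch of how a Big Data pipeline multiplies the bytes crossing the network.
        raw_dataset_tb = 10.0           # assumed raw dataset size

        stages = {                      # assumed fraction of the raw dataset moved per stage
            "ingest raw data":        1.0,
            "transform/load":         1.0,
            "query scans":            1.0,
            "result/report delivery": 0.2,
        }

        total_moved_tb = sum(raw_dataset_tb * factor for factor in stages.values())
        print(f"Raw data: {raw_dataset_tb:.0f} TB")
        print(f"Moved across the network: ~{total_moved_tb:.0f} TB "
              f"({total_moved_tb / raw_dataset_tb:.1f}x the raw size)")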

    Chart: Data is growing exponentially

    All of the above influencers (virtualization, iSCSI SANs, and “Big Data” systems) clearly require significant bandwidth at the enterprise core to facilitate functional operations and performance, as well as to meet output requirements and expectations.

    The core of the problem is that the switches within most data centers are not designed to handle the recent and ongoing proliferation of 10 GbE interfaces at the standard server level. Root-cause analyses of current issues at many locations indicate that 10 GbE core interfaces are no longer sufficient to support daily operations. Serious consideration has to be given to upgrading existing core switches to use 40 and 100 GbE ports as core interconnects. Production requirements continue to expand in complexity, scale, and bandwidth. Without upgrading the ports within a data center’s core network to these speeds, applications either will experience, or are already experiencing, operational impact and performance degradation. Here is a whitepaper with additional information on how a large network vendor recommends dealing with backbone bandwidth needs.

     