NSF Grant Proposal for vBNS Connectivity

NSF "Connections to the Internet" Program (NSF 96-64)
Kansas State University
January 31, 1998

A. Project Summary

While collaboration with colleagues at other research and education institutions in the state, the nation, and around the world has always been important to researchers at Kansas State University, real-time interactive collaboration and access to remote facilities over communications networks are now considered essential to research in nearly every field of study. However, the activities central to these contemporary collaborative research projects, such as transfer of extremely large data files, access to remote supercomputer facilities, distributed parallel computation, visualization, real-time collaboration tools, and video conferencing, have been at best prohibitively slow and at worst impossible over the commodity Internet. By connecting to the National Science Foundation's very high speed Backbone Network Service (vBNS) through the "Connections to the Internet" program (NSF 96-64), K-State intends to remove this roadblock and partner with other vBNS and Internet2 sites to dramatically advance knowledge not only in high performance computing and networking but in the sciences and the arts as well.

Meritorious research projects at K-State with significant and specific high-bandwidth and/or bounded-latency requirements for network communications with remote sites have been identified in six areas: high performance scientific computing, high energy physics, software verification, digital libraries, soybean simulation modeling, and ASIC design (described in detail in section C.2).

In addition, nearly 20 other research projects in a wide variety of disciplines will likewise benefit from the improved network communications. The benefits do not stop there, however. To connect the meritorious researchers' servers and desktops to the vBNS, K-State has made significant improvements to its network infrastructure and will dramatically enhance wide area network communications as a result of this vBNS project. These improvements will benefit all users of the K-State computing and network facilities - not only students, faculty, and staff directly associated with the university, but also the countless people reached by K-State's emphasis on distance learning and its mission of outreach as one of the nation's first land grant institutions.

These network enhancements involve four components: the local campus infrastructure, the state-wide Kansas Research and Education Network (KANREN), the regional Great Plains Network (GPN) consortium, and the portion for which NSF funding is sought - the connection to the vBNS. At the local level, the core network will be enhanced to connect more buildings with 100 Mbps full duplex ethernet. Within buildings, network electronics will be upgraded and new wiring installed where needed. At the state level, the KANREN backbone will be upgraded from a single T1 to ATM over a DS-3 to connect K-State with the regional GPN GigaPoP, to be located in Kansas City and operational by August 1, 1998. The GPN is a consortium of institutions of higher education in six states of the Great Plains region: North Dakota, South Dakota, Nebraska, Kansas, Oklahoma, and Arkansas. The GPN will not only enhance network communications among its member institutions; its GigaPoP will also serve as a regional aggregation point for connecting to the vBNS. All institutions in the GPN that receive a vBNS grant from the NSF, possibly along with the University of Missouri, will cooperate to design an appropriately sized and managed connection to the vBNS out of the Kansas City GigaPoP routing node.

With these network improvements and cooperative efforts, K-State will take its place among the leaders not only in research but also in bringing next-generation network technologies to the nation and the world.

C. Project Description

C.1 Introduction

Kansas State University proposes to connect its campus network to NSF's very high speed Backbone Network Service (vBNS) as part of the "Connections to the Internet" program (NSF 96-64). K-State researchers have long been involved in collaborations with colleagues at other research and education institutions, but have been severely hindered by the limitations of wide area network connectivity over the commodity Internet. Transfers of multi-gigabyte to terabyte sized data files, synchronization of distributed computations, visualization, video conferencing, and even at times a simple telnet session to a remote supercomputer facility have proven to be at best prohibitively slow, if not impossible. Access to high-speed, low-latency vBNS connections to other institutions is essential to the future success of numerous research projects at K-State and would greatly accelerate their progress.

As one of the first land grant institutions, K-State has a 135-year history of teaching, research, and outreach. In fiscal year 1997, K-State's total research funding base was $82.4 million. Extramural research funding reached $52.3 million, of which 68 National Science Foundation grants totaled $6.9 million. In 1997, the university received one of only 10 of the National Science Foundation's Recognition Awards for the Integration of Research and Education (RAIRE). The RAIRE award cited K-State's strong collaborations among its research scientists, education faculty, and K-12 teachers to put modern research techniques and concepts into K-12 classrooms. This, along with strong collaborations within the state's scientific community due to Kansas being an EPSCoR state, positions K-State to quickly and effectively share the results of its meritorious vBNS-supported projects and thereby contribute to the emerging national and global high performance computing and communications infrastructure.

K-State's proposed project to connect to the vBNS involves five components, described in the following sections. First, meritorious research applications with high bandwidth and/or bounded latency requirements are described. Second, the local campus network infrastructure must reach the researchers' servers and desktops with guaranteed levels of service adequate to meet the special requirements of their applications. Third, the state-wide Kansas Research and Education Network (KANREN) will connect the campus to a regional collection point at high speeds. Fourth, K-State will participate in the establishment of that regional collection point (GigaPoP) as an institutional member of the Great Plains Network. Finally, this Great Plains GigaPoP routing node, located in Kansas City, will connect to the nearest vBNS connection point. This last circuit is the one for which NSF funding is requested.

C.2 Meritorious Research Projects Requiring High Speed Network Connectivity

Kansas State University has identified research projects in six major areas that have applications with wide area network requirements not readily satisfied by the commodity Internet: high performance scientific computing, high energy physics, software verification, digital libraries, soybean simulation modeling, and ASIC design. For each application area, the K-State scientists are identified, collaborators at other institutions are listed, and network requirements are described. Much excitement was generated when the K-State research community was apprised of the possibility of connecting to other institutions over the vBNS. As a result, several other research projects are listed that, while not presented here as major projects, would benefit greatly from a vBNS connection since they are limited by current wide area technologies and involve collaboration with researchers at other major institutions. These projects are listed in section C.2.7.

C.2.1 High Performance Scientific Computing Applications

High performance computing applications in the sciences and engineering at Kansas State University (K-State) have undergone a major expansion in the past three years with the establishment of a state-of-the-art computational and visualization facility, highlighted by 48-processor HP/Convex Exemplar SMPs, with partial funding from NSF ARI and MRI grants in 1994 and 1997, respectively. This activity has brought together a group of scientists and engineers in four key research areas declared "grand challenge" initiatives by the National Academy of Sciences. These research projects share the common thread of high-performance computing applied to the simulation of physical, chemical, or biological systems. Simulation coupled with visualization allows researchers to carry out "computer experiments" to observe complicated phenomena that are extremely difficult to isolate in the laboratory. Three seemingly disparate research areas (modeling of novel materials and macromolecules, atomic and molecular structure and collisions, and reactive fluid dynamics) employ similar algorithms and visualization techniques in the pursuit of a more fundamental understanding of the system at hand. These three areas share a common link with the fourth initiative, engineering software development for parallel scientific applications (section C.2.3), which provides the tools necessary for the large scale simulations required to bridge the gap between the microscopic and macroscopic realms.

In parallel with these developments, NSF EPSCoR, together with the State of Kansas, has funded a subset of investigators in these four research areas at K-State, the University of Kansas (KU), and Wichita State University through the Kansas Center for Advanced Scientific Computing (KCASC) for collaborative research. In this connection, a computational facility consisting of a 16-processor SGI Origin2000 was installed at KU in 1997. To further formalize these research groups, centers for Scientific Supercomputing were established at K-State and KU with the approval of the Kansas Board of Regents in 1996. High speed connectivity and large bandwidth for data transfer between the computational facilities at K-State and KU will greatly increase the level of collaborative activity among these researchers.

Finally, the availability of funds through this vBNS grant would provide further impetus to the collaborative research that this same group of high performance computing experts is seeking (or continuing) through the recent NSF initiatives at the National Computational Science Alliance (NCSA), Urbana, Illinois, and the National Partnership for Advanced Computational Infrastructure (NPACI), San Diego, California, in which they are partners. A brief overview follows of the research interests of some of the scientists and engineers who have a stake in the funding of this proposal for a high speed connection to the vBNS.

Computer Modeling of Materials: Surfaces and Nanocrystallines. [KARA97] [KUER97] Dr. Talat S. Rahman, Department of Physics, Kansas State University; Dr. Brian Laird and Dr. Benjamin Leimkuhler, University of Kansas; Dr. Lubos Mitas, University of Illinois, Urbana; Dr. John Connolly and co-workers, University of Kentucky; and Dr. Dwight Jennisen, Sandia National Laboratories.

Computer Modeling of Materials: Polymers at Interfaces. [CHEN97] Dr. Amitabha Chakrabarti, Department of Physics, Kansas State University; Dr. Benjamin Leimkuhler, University of Kansas; Dr. S. D. Mahanti and Dr. Aniket Bhattacharya, Michigan State University; Dr. John Marko, University of Illinois, Chicago.

Tribological Properties of Advanced Materials. [JIANG96] Dr. Shaoyi Jiang, Department of Chemical Engineering, Kansas State University; Dr. William Goddard, California Institute of Technology.

Structural and Computational Study of Protein-Solvent Interactions. [SMITH94] Dr. Ramaswamy Krishnamoorthi and Dr. Paul E. Smith, Department of Biochemistry, Kansas State University; Dr. Krystov Kuczera, University of Kansas; Dr. Tjerk P. Straatsma, Pacific Northwest National Laboratory, Washington.

C.2.2 High Energy Physics Research
Dr. Ron A. Sidwell, Dr. Neville W. Reay, Dr. Timothy A. Bolton, Dr. Donna L. Naples, and Dr. Noel R. Stanton, Department of Physics, Kansas State University, with collaborators at Fermi National Accelerator Laboratory, in Japan, Korea, Israel, and Greece, and at 10 other universities.

The High Energy Physics (HEP) group is leading, or is a major contributor to, five experiments based primarily at Fermi National Accelerator Laboratory. Collaborations consist of up to 100 physicists and an equal number of support staff and students. For example, the COSMOS experiment has collaborators in Japan, Korea, Israel, and Greece, and at 10 universities in the USA. The HEP group at Kansas State University consists of five faculty, three post-doctoral researchers, six graduate students, an administrative assistant, a technician, and typically 3-12 undergraduate research assistants. Total funding for FY98 from the U.S. DOE and NSF is more than $1,000,000. The four major experiments requiring high speed network connectivity are described below. The limited performance of the commodity Internet has severely hindered collaboration among the researchers involved in these projects. A connection to the vBNS would address the following high performance network needs of the HEP group: on-demand updates of software source code, libraries, and support data (distributions involve tens of thousands of files up to a gigabyte in size); movement of data files larger than 1 GB (the E791 experiment has a database exceeding 50 terabytes); real-time viewing and manipulation of data being acquired; and video conferencing with widely dispersed collaborators (the ISDN standard is inadequate).
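
To put these requirements in perspective, the following back-of-the-envelope Python sketch (our own illustration; it uses raw line rates only and ignores protocol overhead, congestion, and retransmission, all of which worsen the picture on the commodity Internet) compares idealized transfer times over a T1 circuit and over DS-3 access:

    # Idealized transfer-time estimates (raw line rate only; protocol
    # overhead and congestion are deliberately ignored).

    def transfer_seconds(size_bytes, link_mbps):
        """Payload bits divided by the raw link rate."""
        return size_bytes * 8 / (link_mbps * 1e6)

    GB, TB = 1e9, 1e12

    for label, mbps in [("T1   (1.544 Mbps)", 1.544),
                        ("DS-3 (44.736 Mbps)", 44.736)]:
        file_hours = transfer_seconds(1 * GB, mbps) / 3600
        db_days = transfer_seconds(50 * TB, mbps) / 86400
        print(f"{label}: 1 GB file ~ {file_hours:.2f} h, "
              f"50 TB database ~ {db_days:.0f} days")

Even under these ideal assumptions, a single 1 GB file ties up a T1 for well over an hour, and staging any appreciable fraction of the E791 database is out of the question; at DS-3 rates, gigabyte-scale transfers drop to a few minutes.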

E815 Experiment, "NuTeV". T. Bolton, D. Naples (DOE Outstanding Junior Investigator), and three graduate students, Kansas State University; Collaborators (with one scientist per institution listed): University of Cincinnati (R. A. Johnson), Columbia University (M. Shaevitz), Fermi National Accelerator Lab (R. H. Bernstein), Northwestern University (H. Schellman), University of Rochester (A. Bodek), and University of Wisconsin (W. H. Smith).

E791 Experiment. D. Mihalcea, N. Reay, R. Sidwell, N. Stanton, S.-W. Yang, and S. Yoshida, Kansas State University; Collaborators (with one scientist per institution listed): Fermi National Accelerator Lab (J. Appel), Stanford University (P. Burchat), University of Cincinnati (M. Sokoloff), IIT (R. A. Burnstein), University of Massachusetts, Amherst (G. Blaylock), University of Mississippi (J. J. Reidy), Princeton University (A. J. Schwartz), University of South Carolina (M. Purohit), Tufts University (A. Napier), University of Wisconsin (M. Sheaff), and Yale University (A. Slaughter).

E872 Experiment, "DONUT". P. Berghaus, M. Kubantsev, N.W. Reay, R. A. Sidwell, N. R. Stanton, S. Yoshida, Kansas State University; Collaborators (with one scientist per institution listed): Fermi National Accelerator Lab (B. Lundberg), University of Pittsburgh (V. Paolone), University of South Carolina (C. Rosenfeld), Tufts University (W. Oliver), University of Minnesota (K. Heller).

E803 Experiment, "COSMOS". M. Kubantsev, D. Naples, N. W. Reay, R. Sidwell, and N. Stanton, Kansas State University; Collaborators (with one scientist per institution listed): University of California at Davis (P. Yager), UCLA (D. Cline), Fermi National Accelerator Lab (S. Childress), Illinois Institute of Technology (IIT) (R. Burnstein), Indiana University (J. Musser), University of Michigan (R. Thun), University of South Carolina (C. Rosenfeld), Tufts University (J. Schneps), and Washington University, St. Louis (R. Binns), as well as researchers from Japan, Korea, Israel, and Greece.

C.2.3 Software Verification [CLARK86] [DWYER94] [DWYER98a] [DWYER98b] [MANN95]
Dr. Matthew B. Dwyer, Department of Computing and Information Sciences, Kansas State University; Dr. George Avrunin, Dr. Lori Clarke, and Dr. Leon Osterwell, University of Massachusetts; Dr. James Corbett, University of Hawaii.

This four-year, $1.4 million NSF project performs statistically sound empirical evaluation of techniques for finite-state verification (FSV) of software. The project involves running different FSV tools on a large population of software systems and requirements specifications and studying the accuracy and performance of the verification runs. For this project, a large repository is being established to store FSV tools, applications, specifications, verification results, statistical analyses, and logs of problematic runs. Diagnosis of problems and analysis of these data will be shared across project members. The scale of this project and the distributed nature of expertise among the project investigators imply that rich collaborative environments will be required. For example, diagnosing the causes of problematic verification runs will require shared interactive execution of FSV tools. Control over execution may shift among Hawaii, Massachusetts, and Kansas depending on the nature of the diagnostic process. This kind of shared application space would also be of great use in demonstrating the capabilities and use of FSV tools as part of our efforts to transition FSV technology into industry. Support for multi-party audio/video conversations and shared work spaces would significantly improve these interactions. Since such environments are not possible on the commodity Internet, vBNS bandwidth is needed to conduct this collaborative research.


C.2.4 File Servers for Digital Libraries [ANDRE97] [ANDRE98] [SMITH96]
Dr. Daniel Andresen, Department of Computing and Information Sciences, Kansas State University;
Members of the Parallel and High Performance Processing (PHPP) team of the Alexandria Digital Library Project at the University of California, Santa Barbara, including project director Dr. Terence R. Smith, Computer Science Department chair Dr. Oscar Ibarra, Dr. Tao Yang, Dr. Klaus Schauser, and Dr. Omer Egecioglu; San Diego Supercomputer Center.

Dr. Andresen collaborates with members of the PHPP team at the University of California, Santa Barbara (UCSB) on scheduling distributed and parallel digital library applications such as image processing, database query, and content distillation. These applications typically combine movement of large amounts of data with major computational requirements, and they must be allocated properly to ensure the fastest possible response times and support project scalability.

Several major research thrusts would benefit greatly from very-high-speed Internet connectivity between research sites. First, as information becomes distributed beyond simple server clusters into physically and logically separate locations, effectively scheduling computations across heterogeneous server and storage resources requires an efficient mechanism for transferring computations. Without such an infrastructure, most computation is confined to the locale where the data are stored, causing significant difficulties when a computation requires data from multiple sources or the computational resources available at the data storage centers are insufficient. Additionally, continuing research in the national digital library projects, particularly those at Stanford University and UCSB, in conjunction with the San Diego Supercomputer Center, indicates that just such an information architecture will be the reality for next-generation distributed knowledge systems.

Performance and scalability issues are especially important for the Alexandria Digital Library (ADL) project at UCSB, with which members of the K-State faculty are conducting research. The fundamental goal of this project is to provide users with the ability to access and process broad classes of spatially-referenced materials over the Internet. Materials currently in the collections of ADL and accessible through the ADL World Wide Web (WWW) server include geographically-referenced items such as digitized maps, satellite images, digitized aerial photographs, and associated metadata. When fully developed, ADL will comprise a set of nodes distributed over the Internet supporting such library components as collections, catalogs, interfaces, and ingest facilities. The planned collections include millions of items requiring terabyte levels of storage, and the catalog component alone contains a metadatabase of significant size. Many collection items have sizes in the gigabyte range, while others require extensive processing to be of value in certain applications.

Dr. Daniel Andresen of Kansas State University is a member of the ADL Systems and PHPP teams and is actively contributing to the systems research segment proposed for the next generation of ADL.

Consider, for example, a user requesting a subimage of a Landsat picture from ADL at UCSB, a request requiring both significant amounts of data and significant computation to complete. If the data were stored at K-State, the current Internet architecture would virtually demand that the computation for the request be moved from ADL at UCSB to K-State, despite the fact that K-State's computer systems might be heavily overloaded. If high-bandwidth communication links were available, delivering the data to the powerful servers at UCSB could lead to significant overall time savings for the user and better load balancing across the various DL sites. Properly distributing Internet requests and maximizing the use of existing computational and analytical resources demands high-bandwidth networking capabilities like those of the vBNS. The emergence of gigabit information superhighways further strengthens the vision of transparently accessible, world-wide global computing resources.
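
The trade-off in the Landsat example can be made concrete with a simple cost model. The Python sketch below is purely illustrative; the link rates, server speeds, workload, and load factor are assumed values, not measurements from the ADL testbed:

    # Illustrative move-the-data vs. compute-in-place cost model.
    # All figures are assumptions for the sake of the example.

    def ship_and_compute(data_gb, link_mbps, remote_gflops, work_gflop):
        """Send the data to a fast remote server, then compute there."""
        transfer_s = data_gb * 8000 / link_mbps   # GB -> megabits / Mbps
        return transfer_s + work_gflop / remote_gflops

    def compute_in_place(local_gflops, load_factor, work_gflop):
        """Compute on the (possibly overloaded) host holding the data."""
        return work_gflop / (local_gflops / load_factor)

    work, data = 500.0, 2.0   # 500 GFLOP of processing on a 2 GB subimage
    print("ship over T1:   %7.0f s" % ship_and_compute(data, 1.544, 10.0, work))
    print("ship over DS-3: %7.0f s" % ship_and_compute(data, 44.736, 10.0, work))
    print("compute in place (4x overloaded): %7.0f s" %
          compute_in_place(1.0, 4.0, work))

Under these assumptions, only the high-bandwidth path makes it profitable to move the request to the better-provisioned site; over a T1, the scheduler has no real choice, which is exactly the constraint described above.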

C.2.5 Multidimensional Parameter Estimation for Soybean Model
Dr. Stephen M. Welch, Department of Agronomy, Kansas State University; Dr. James W. Jones, University of Florida; Dr. William D. Batchelor, Iowa State University.

Currently, the United Soybean Board is funding a multi-state project to adapt an existing soybean simulation model to on-farm decision support. The model performs well in this application given the availability of appropriate "genetic coefficients" describing the growth habits of individual soybean varieties. However, direct measurement of these values can take longer than the commercial lifetime of a new variety. Alternatively, the coefficients can be estimated by fitting the model to the large sets of less-focused variety testing data collected routinely in most states. Unfortunately, the current implementation of the model is microcomputer legacy code whose rewrite could take two years. Dr. Welch is therefore pursuing an alternative: parallel processing on networked microcomputers. A system of 100 Pentium 166 machines could execute 50,000 iterations of a global optimization routine with a moderately large data set in 200 hours, thereby simulating 50 million growing seasons in about one week. The soybean modeling community is closely knit but distributed across the soybean belt, and both collaboration and hardware availability mandate that the optimization network be similarly distributed. In coarse-grained MIMD systems, it is necessary to maximize the ratio of compute time to communication delay. Interprocessor messages are short in these applications, and node computations requiring 15-30 minutes are easily arranged, leaving network latency as the major limiting factor. Therefore, a low-latency interstate network such as the vBNS is necessary for the completion of this project.
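
The throughput figures quoted above are internally consistent, as the following Python sketch checks (assuming, for illustration only, that iterations divide evenly across the machines):

    # Consistency check of the figures quoted above (assumes iterations
    # divide evenly across the machines).

    machines, iterations, wall_hours = 100, 50_000, 200

    iters_per_machine = iterations / machines                 # 500
    minutes_per_iter = wall_hours * 60 / iters_per_machine    # 24.0

    # 24 minutes per iteration sits squarely in the quoted 15-30 min
    # range for node computations, so per-iteration communication must
    # stay in the low seconds for the one-week target to hold; with 100
    # nodes synchronizing every step over a high-latency path, it cannot.
    print(f"{minutes_per_iter:.0f} min per iteration per machine; "
          f"{50_000_000 // iterations} growing seasons per iteration")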

C.2.6 ASIC Design for a Global Tracking System
Dr. Don M. Gruenbacher, Department of Electrical and Computer Engineering, Kansas State University; Wil Devereus and Lloyd Linstrom, Johns Hopkins University Applied Physics Laboratory.

Dr. Gruenbacher is currently developing an Application Specific Integrated Circuit (ASIC) for use in processing Global Positioning System (GPS) signals in spaceflight applications. Specifically, this system will provide precise spacecraft position and attitude measurements. Under a contract with the Johns Hopkins University Applied Physics Laboratory (JHU/APL), Dr. Gruenbacher is required to use design software tools on computers at the JHU/APL. Current Internet capabilities severely limit his ability to perform simulations of the system design because of the large amount of textual and graphical data that must be exchanged between JHU/APL and Kansas State University. The vBNS is vital if realistic real-time simulations involving large amounts of graphical data are to be performed on remote computing facilities.

C.2.7 Other Research Projects That Would Benefit from a vBNS Connection

Much interest in high-performance wide area network connectivity was generated when the research community at Kansas State University was apprised of the possibility of a high speed connection to their collaborators via the vBNS. While not deemed major projects with substantial bandwidth or bounded latency requirements, the following applications would nonetheless benefit from a vBNS connection because they involve collaboration with other major research institutions and have network communications requirements not currently satisfied by the commodity Internet. The diversity of these projects reflects the broad range of disciplines at K-State that would benefit from the proposed vBNS connection.


C.3 Network Engineering Plan

C.3.1 Current K-State Network Infrastructure

Network Core and Distribution

The core of Kansas State University's current network is a switched full duplex 100 Mbps collapsed ethernet backbone connecting Cisco Systems, Inc. routers (see figure 1).

The network is distributed to each campus building via three fiber stars, with multiple single-mode and multi-mode fiber cables running to each building. Each star is serviced by one or more routers connected to the backbone switch at 10 or 100 Mbps. Buildings with historically lower bandwidth requirements are connected to core routers at 10 Mbps, which are in turn connected to the backbone switch at 10 Mbps. Buildings with key campus servers and higher bandwidth needs are connected to high performance core routers at full duplex 100 Mbps, which are in turn connected to the core switch at full duplex 100 Mbps. Network connections to each building are continually monitored to prioritize and anticipate the need for greater connectivity.

Over the past two years, upgrades to the core and distribution network infrastructures have been made both in response to and in anticipation of high speed connectivity needs on campus. Some upgrades were also initiated when K-State became a charter member of the Internet2 project. Upgrades included installation of the backbone switch, the addition of Cisco 7000 and 7513 routers in the core, and the replacement of building entry point shared hubs with ethernet switches capable of supporting multiple high speed technologies (Fast Ethernet, 100VG, FDDI, and ATM in use; Gigabit ethernet planned). Most of these devices are Cisco Catalyst series switches.

Fiber optic cables connect several outlying campus centers and leased T1 lines connect other remote sites, including the K-State campus in Salina, Kansas.

Building Infrastructure to the Servers and Desktops

Networks within buildings are a mixture of shared 10 Mbps, switched 10 Mbps, switched 100 Mbps ethernet, 100VG, FDDI, and ATM. A gigabit ethernet network will soon be installed in the Department of Computing and Information Sciences. The campus cable plant is a mixture of Category 3 and 5 twisted pair, coax, and fiber. All new installations are either Category 5 copper or, when distance is an issue, fiber optic cable.

Concurrent with the enhancements in the core and distribution segments of the campus network, infrastructure within the buildings is being upgraded. Shared hubs are being replaced with ethernet switches where performance problems have been identified. All new installations use switches instead of hubs. As for the cable plant, a long-term project is underway to re-wire old buildings that have obsolete and problematic network wiring. Fiber infrastructure is being installed to reach new state-of-the-art terminal rooms to distribute data throughout the building at speeds of at least 100 Mbps. Construction on the first building is under way with two more planned for this fiscal year. This design allows for flexible growth by distributing appropriate network technologies in the terminal rooms throughout the buildings in response to either localized or wide area requirements.

Internet Connectivity

Wide area networking to the commodity Internet and other Kansas education institutions, including the University of Kansas, is provided by the Kansas Research and Education Network, a consortium of higher education institutions, K-12 school districts, and other non-profit organizations in the state of Kansas. KANREN's T1 backbone connects K-State to other KANREN members and provides redundancy for Internet connectivity. KANREN also maintains three T1 circuits to connect K-State to the commodity Internet.

C.3.2 Planned High-Speed Campus Infrastructure

The short-term plan for the core is to retain the switched ethernet backbone and connect additional buildings to the core routers at full duplex 100 Mbps ethernet. All buildings involved in the primary research projects described in section C.2 will be connected to the core at full duplex 100 Mbps within the next year and equipped with a high performance switch to serve the needs of the researchers in those buildings. Installing high-performance modular switches such as the Cisco Catalyst models 3200 and 5000 provides the flexibility to connect buildings to the core with different high-speed technologies as needs change.

Long-term plans call for adding an ATM core initially at OC-3 speeds to connect the high performance core routers and selected buildings. The current switched ethernet backbone will be retained to provide redundant paths in the core.

Plans are also underway to run fiber to K-State's Manufacturing Learning Center in an industrial park located several miles from the main campus. This will replace the leased T1 currently connecting the Center and extend high-speed access to that site. Related to this project is the replacement of the T1 serving the K-State campus in Salina, Kansas, with a T3 circuit. Both projects are expected to be completed within the next year.

Within buildings, shared hubs will continue to be replaced by switching devices, and Category 3 cable will be replaced by Category 5 and fiber optic cabling. K-State will initially provide at least a switched 10 Mbps connection to the desktop of each researcher with a meritorious vBNS application. In cases where 10 Mbps is inadequate, switched 100 Mbps ethernet connections will be provided. Concurrent with this will be switched 100 Mbps connections to servers and 100 Mbps backbones within buildings. After ATM is deployed in the core, ATM may be used as the backbone technology within buildings and even to the desktop and/or selected servers.

While these improvements are at least partially motivated by the need to provide high-speed access to the vBNS and Internet2 sites for specific researchers, all users in the affected buildings will benefit from the increased bandwidth to their building and in the core. No restrictions will initially be placed on these connections that would limit commodity use by any faculty, staff, or student at Kansas State University. In essence, Quality of Service (QoS; see section C.3.4) guarantees will be provided by over-provisioning the network. However, the QoS developments of Internet2 and the research community will be monitored closely and deployed in our core and distribution networks in order to provide end-to-end guarantees to the researchers.

C.3.3 High-Speed WAN/vBNS Connectivity

C.3.3.1 KANREN

The Kansas Research and Education Network (KANREN, http://www.kanren.net) is a consortium of institutions of higher education, K-12 school districts, and other non-profit organizations within the state of Kansas. KANREN provides three T1 circuits each into Kansas State University and the University of Kansas for connectivity to the commodity Internet. KANREN currently has a T1 backbone that traverses the state and provides redundancy for these Internet connections.

The KANREN backbone consists of T1 circuits between the campuses at Manhattan (Kansas State University), Lawrence (University of Kansas), Kansas City (University of Kansas Medical Center), and Wichita (Wichita State University) (see figure 2). This backbone is currently being upgraded to prepare for connectivity to the Great Plains Network at DS-3 speeds through an ATM connection. Connectivity to the commodity Internet will likewise be upgraded during this project.

As major users of the KANREN backbone and Internet connectivity, Kansas State University and the University of Kansas will provide much of the funding for the KANREN backbone upgrade. Both institutions have been intimately involved in KANREN since its inception and are instrumental in the planning process for the upgrades. KANREN's network engineers have a long history of designing and building campus and regional networks. They were also instrumental in engineering MIDnet, one of the first regional NSFNET networks, and are currently network specialists associated with the University of Kansas.

KANREN is committed to developing a network that will support the protocols necessary to sustain connectivity for Kansas State University and the University of Kansas to the vBNS through the Great Plains Network. This includes QoS either at the IP layer or within ATM, as well as IPv6 and whatever future technologies emerge. To accomplish this, KANREN will collaborate extensively with Kansas State University, the University of Kansas, and the staff and membership of the Great Plains Network.

C.3.3.2 The Great Plains Network

The Great Plains Network (GPN - http://www.greatplains.net/) is a consortium of midwestern universities, including Kansas State University, in the states of North Dakota, South Dakota, Nebraska, Kansas, Oklahoma, and Arkansas. Initial funding for the GPN comes from an EPSCoR/NSF grant awarded in August 1997. This grant supports research in Earth Systems Science between the participating EPSCoR universities and the Earth Resources Observation Systems (EROS) Data Center in Sioux Falls, SD. In addition, the Internet2 members within the GPN (including Kansas State University), together with the University of Missouri, have committed to expanding the role of the network to that of an Internet2 GigaPoP.

The initial design for the GigaPoP has been created, and a request for proposals (RFP) for connectivity has been issued. The final design will depend on responses to the RFP. Initial operation of the GPN is expected in August 1998.

GPN Topology and Facilities

The Great Plains GigaPoP is expected to have two routing nodes - one in Kansas City and the other at the EROS Data Center in Sioux Falls, SD (see figure 2). These collection points will house some combination of routers and/or ATM switches along with both monitoring and measurement equipment. Each of the states participating in the consortium (KANREN in the case of Kansas) will have a connection to one of the routing nodes via either an ATM switch or a router.

The locations for the two routing nodes were chosen for a variety of reasons: to minimize the lengths of the circuits, to maximize the potential for connecting to the national infrastructure, and to satisfy the requirements of the EPSCoR/NSF award. The EROS Data Center already houses several agency network connections and increasingly serves as a focal point for connections in this region. A GigaPoP routing node located at EROS enhances the ability to bring agency networks to the campuses via the GigaPoP rather than through direct connections. EROS is also the source of much of the data and other resources relevant to the scientific investigations proposed in the EPSCoR/NSF grant. The choice of Kansas City for the other GigaPoP routing node was made because it is a telecommunications focal point and is central to the states participating in the Great Plains consortium. Having two routing nodes also allows for the possibility of two, and therefore redundant, connections to the vBNS, each of which could connect to a different vBNS connection point.

The underlying network technology of the GPN is expected to be ATM since it will allow separation of commodity Internet traffic from vBNS traffic and has the greatest potential for QoS implementation.

State Demarcation and Management

The Great Plains Network will connect to each of the participating states with at least DS-3 speed at a location determined when the RFP is awarded. In the state of Kansas, the demarcation site is expected to be at a KANREN site in the Kansas City area. It will contain either an ATM switch or router administered by the GPN. This will connect the GPN to the KANREN backbone and therefore Kansas State University (see figure 2).

The appropriate peering relationships will be implemented by the state in collaboration with the GPN. Routing policy will be determined by the GigaPoP staff in accordance with the Internet2 working group on routing. Careful control of routing policy will be maintained at all levels in the operation of the GigaPoP to ensure that the appropriate policies of all connected networks are respected. Local site administrators will not have configuration control of GPN equipment, but strong collaboration is expected between the GPN staff and the participating members.

The GPN will monitor and manage all connections to the GigaPoP and coordinate communications with all connected networks, including KANREN. The membership is expected to participate actively in this process - Kansas State University and the University of Kansas are represented on the GPN management and technical teams. The expertise gained will be vital for implementing both local and state networks, and in providing local understanding of the national infrastructure.

C.3.3.3 Proposed vBNS Connectivity

Kansas State University is requesting funds to purchase a DS-3 connection from the Great Plains Network GigaPoP to the nearest vBNS connection point (see figure 2). This will most likely be from the GPN routing node in Kansas City to the vBNS connection point in Chicago. K-State will collaborate with the University of Kansas and other universities in the GPN consortium in the development of a regional aggregation point for the connection to the national infrastructure. Those institutions receiving a vBNS "Connections to the Internet" grant will therefore cooperate in organizing the vBNS and other agency connections through the GPN. As more institutions in the region require vBNS connectivity, the connection(s) between the GPN GigaPoP and the vBNS will be expanded to one or more OC-3 connections.

C.3.4 Quality of Service (QoS) Guarantees

Initially, QoS guarantees will be provided on the local campus by simple over-provisioning. Connections to vBNS researchers will be at least switched 10 Mbps to the end node, with progressively greater bandwidth toward the core in amounts ample enough not to limit performance. The Network Systems group in Computing and Network Services at Kansas State University will work with the researchers to monitor performance and improve connectivity where necessary to ensure that the requirements of the meritorious applications are met.
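
As a rough illustration of what such over-provisioning means in practice, the following Python sketch checks a hypothetical building switch; the port counts and peak-concurrency figure are assumptions for illustration, not measured campus values:

    # Rough over-provisioning check with assumed (not measured) figures.

    edge_ports_mbps = [10] * 24 + [100] * 4   # hypothetical building switch
    peak_concurrency = 0.10                   # fraction of ports busy at once
    uplink_mbps = 100                         # full duplex uplink, per direction

    expected_peak = peak_concurrency * sum(edge_ports_mbps)
    print(f"expected peak demand: {expected_peak:.0f} Mbps on a "
          f"{uplink_mbps} Mbps uplink "
          f"(headroom {uplink_mbps / expected_peak:.1f}x)")

So long as monitoring shows this headroom holding along the path from the researcher's switch port to the campus core, the meritorious applications see no local bottleneck without any per-flow machinery.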

For the short term in the wide area, KANREN and the Great Plains Network are committed to providing QoS guarantees for vBNS traffic, most likely in the form of provisioned capacity on shared circuits. Again, Kansas State University will work closely with KANREN and the GPN to coordinate QoS efforts.

For the long-term, implementation of QoS is expected to evolve quickly toward dynamic differentiation of service classes over shared circuits at the local, regional, and national levels. Whether this happens over IP with protocols like RSVP, with the inherent QoS properties of ATM, or with a new unforeseen technology, K-State will work with KANREN, the GPN, and the national infrastructure to implement the service guarantees end-to-end and make them widely available.

C.3.5 Planning Process

Management of the campus data network at Kansas State University is the responsibility of the Network Systems group in Computing and Network Services while the cable plant is the responsibility of the Department of Telecommunications. The two units meet regularly to coordinate and plan.

The network engineering plan described in this section was produced by a team representing computing, networking, telecommunications, and multimedia technologies, with input from representatives of the major applications areas described in section C.2. The plan was then presented for final approval to the high-speed connectivity team working on this proposal. This team includes technical staff, central administration, and representatives from the major applications areas.

Kansas State University also collaborated extensively with representatives from the University of Kansas, KANREN, and the Great Plains Network consortium to develop the engineering plan and ensure consistency, cooperation, and compatibility among the respective high-speed networking efforts.

C.3.6 Project Management

Administrative and Technical Staff


Project Schedule

Evaluation and Dissemination of Results

D. References Cited

[ANDRE97] Andresen, D., T. Yang, and O. Ibarra, "Towards a Scalable Distributed WWW Server on Workstation Clusters," The Journal of Parallel and Distributed Computing (1997).
[ANDRE98] Andresen, D., T. Yang, O. Ibarra, and O. Egecioglu, "Adaptive Partitioning and Scheduling for Enhancing WWW Application Performance," to appear in The Journal of Parallel and Distributed Computing (1998).
[CHEN97] Chen, H. and A. Chakrabarti, "Surface-directed Spinodal Decomposition: Hydrodynamic Effects," Phys. Rev. E 55, 5680 (1997).
[CLARK86] Clarke, E.M., E.A. Emerson, and A.P. Sistla, "Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic Specifications," ACM Trans. On Prog. Lang. and Systems, 8(2), pp. 244-263 (1986).
[DIMIT96] Dimitrov, D.A. and G.M. Wysin, "Lifetime of vortices in 2D easy-plane ferromagnets," Phys. Rev. B 53, 8539 (1996).
[DWYER94] Dwyer, M.B. and L.A. Clarke, "Data Flow Analysis for Verifying Properties of Concurrent Programs," Software Engineering Notes, 19(5), pp. 62-75 (1994).
[DWYER98a] Dwyer, M.B., G.S. Avrunin, and J.C. Corbett, "Property Specification Patterns for Finite-state Verification," Proceedings of the 2nd ACM Workshop on Formal Methods in Software Practice (1998).
[DWYER98b] Dwyer, M.B., J. Hatcliff, and M. Nanda, "Using Partial Evaluation to Enable Verification of Concurrent Software," to appear in ACM Computing Surveys (1998).
[FOX96] Fox, R.O., "On velocity-conditioned scalar mixing in homogeneous turbulence," Phys. Fluids 8, 2678 (1996).
[GUI97] Gui, A.A., J.K. Shultis, and R.E. Faw, "Response Functions for Neutron Skyshine Analyses," Nucl. Sci. Eng. 128, 11 (1997).
[JIANG96] Jiang, S., S. Dasgupta, M. Blanco, R. Frazier, E.S. Yamaguchi, Y. Tang, and W.A. Goddard III, "Structure and Vibrations of Dithiophosphate Wear Inhibitors by Ab Initio Quantum Mechanics and Molecular Mechanics," J. Phys. Chem. 100, 15760 (1996).
[KARA97] Kara, A., S. Durukanoglu, and T.S. Rahman, "Vibrational Dynamics and Thermodynamics of Ni(977)," J. Chem. Phys. 106, 2031 (1997).
[KUER97] Kuerpick, U., A. Kara, and T.S. Rahman, "The Role of Lattice Vibrations in Atom Diffusion," Phys. Rev. Lett. 78, 1086 (1997).
[KUANG96] Kuang, J. and C.D. Lin, "Comprehensive convergence tests of two-center AO close-coupling calculations for the excitation and ionization of atomic hydrogen by keV protons," J. Phys. B 29, 5443 (1996).
[KURP96] Kurpick, P. and U. Thumm, "Basic Matrix Element in Ion-Surface Interactions," Phys. Rev. A 54, 1487 (1996).
[MANN95] Manna, Z. and A. Pnueli, Temporal Verification of Reactive Systems, Springer-Verlag (1995).
[SMITH94] Smith, P.E. and B.M. Pettitt, "Modeling solvent in biomolecular systems," J. Phys. Chem. 98, 9700 (1994).
[SMITH96] Smith, T.R., D. Andresen, L. Carver, R. Dolin, C. Fischer, J. Frew, M. Goodchild, O. Ibarra, R. Kemp, R. Kothuri, M. Larsgaard, B. Manjunath, D. Nebert, J. Simpson, A. Wells, T. Yang, and Q. Zheng, "A Digital Library for Geographically Referenced Materials," IEEE Computer, 29(5), pp. 54-60 (1996). (Note erratum in IEEE Computer, 29(7), p. 14).
[ZAKR96] Zakrzewski, V.G., O. Dolgounitcheva, and J.V. Ortiz, "Ionization Energies of Anthracene, Phenanthrene, and Naphthacene," J. Chem. Phys. 105, 8748 (1996).


F. Budget Justification

Personnel

Dr. Elizabeth A. Unger, Vice Provost for Academic Services and Technology, will provide administrative oversight for the project on a 0.05 FTE basis for 24 calendar months. Harvard Townsend, UNIX and Networking Manager for Computing and Network Services, will serve as Co-PI and provide technical and operational management support for the project on a 0.1 FTE basis. Dr. Jeanette Harold, Director of K-State's Information Technology Assistance Center, will serve as Co-PI and provide user support for the project on a 0.1 FTE basis. The total matching funds contribution for salaries, wages, and benefits (direct costs) is $46,460; the benefit rate is 27.28%. Matching funds will also pay indirect costs of $21,372 at a rate of 46%.

Equipment

Matching funds in the amount of $25,000 will be used to purchase an ATM switch to support the vBNS connection at the Great Plains Network GigaPoP routing node in Kansas City.

Circuit Costs

Circuit costs for a connection to the vBNS consist of three components: building KANREN's high-speed backbone network and connecting K-State to it, connecting KANREN's backbone to the Great Plains Network, and connecting the Great Plains Network to the vBNS. Kansas State University will pay an estimated $7,500 per month in matching funds for the DS-3 circuit in the KANREN backbone necessary to connect the campus to the Great Plains Network and therefore the vBNS. The circuit connecting KANREN to the Great Plains Network is covered for two years by the EPSCoR grant that established the Great Plains Network; Kansas State University will pay for this connectivity after the two-year EPSCoR grant period. This proposal requests NSF funds, along with EPSCoR co-funding, to share the costs of the DS-3 circuit connecting the Great Plains Network GigaPoP routing node to the nearest vBNS connection point. The monthly circuit costs from MCI for a DS-3 connection from the Kansas City area to the vBNS connection point in Downers Grove, IL, are as follows:

Campus Network Infrastructure Improvements

As part of this project, Kansas State University has committed to upgrading the campus network infrastructure to provide at least switched 10 Mbps ethernet, and in some cases 100 Mbps ethernet, to the end users involved in the meritorious research projects. This requires upgrades to our connection to KANREN (100 Mbps full duplex ethernet), to connections between the core and campus buildings (at least 100 Mbps full duplex ethernet), and to infrastructure within buildings. Half of the buildings involved in the research projects are already connected to the core backbone at 100 Mbps and therefore require no additional expense. Router interfaces, switches, and modular switch interfaces must be purchased to connect the remaining buildings. Within buildings, a number of locations need new wiring in addition to new network switches, and some will require fiber optic cable due to distance limitations. Over the two-year grant period, the total expected costs of local infrastructure improvements directly associated with the meritorious applications are:

Budget Summary

Continuing Commitment Beyond Grant Period

After the two-year funding period for the GPN EPSCoR grant and the NSF Connections to the Internet grant, Kansas State University is committed to continuing support and funding for all portions of the project - the local infrastructure, KANREN, the Great Plains Network, and the vBNS connection.


H. Facilities, Equipment, and Other Resources

Resources available at Kansas State University to perform the proposed project are described in the Network Engineering Plan in section C.3.

