Graduate Research Presentations

High-Density Clients Lead to Wi-Fi Upgrade

Presented by Haymanot Gebre-Amlak, University of Missouri – Kansas City
Haymanot Gebre-Amlak, Moe Al Mansoori, Tajul Md Islam, Daniel Cummins, Baek-Young Choi

With the introduction of the Internet of Things (IoT) and the growing number of devices needing connectivity, Wi-Fi networks are experiencing increased utilization and load globally. High-density wireless networks are typically environments where the number of client devices and the required application throughput exceed the available capacity of a traditional “coverage-oriented” Wi-Fi network design. Even a well-designed wireless network that provides coverage at good signal strength and signal-to-noise ratio (SNR) throughout the desired area is insufficient to ensure high performance in high-density environments because of the limited available airtime. Since Wi-Fi relies on a shared and unbounded medium, clients and access points must compete for the same airtime to transmit data. A campus is an example of a high-density wireless environment, where students sit in clusters and bring multiple devices to a classroom. In this paper, we carry out an in-depth analysis of a university campus Wi-Fi upgrade deployment that addresses the high-density issue. We explore the various design considerations made when implementing IEEE 802.11ac to improve compatibility, data rate, coverage, and performance. In addition, we perform a post-upgrade evaluation using Splunk, a well-known big-data analysis tool. Our analysis shows that although the number of people on campus decreased this semester, both network flows and traffic increased by as much as 20%, driven by an increased number of Wi-Fi devices on the network.

HPC-Bench: A Tool to Optimize HPC Benchmarking Workflow

Presented by Dr. Gianina Alina Negoita, Iowa State University
Gianina Alina Negoita, Glenn R. Luecke, Shashi K. Gadia, Gurpur M. Prabhu

Today’s complex high-performance computing (HPC) systems are constantly evolving, making it important to be able to easily evaluate the performance and scalability of parallel applications on both existing and new HPC machines. Evaluating application performance can be long and tedious. To optimize the workflow for this process, we have developed a tool, HPC-Bench, using the Cyclone Database Implementation Workbench (CyDIW) developed at Iowa State University. HPC-Bench integrates our workflow into CyDIW as a plain-text file and encapsulates the specified commands for multiple client systems. By clicking the “run” button in CyDIW’s GUI, HPC-Bench automatically writes the appropriate scripts, submits them to the job scheduler, collects the output data for each application, and then generates performance tables and graphs. Using our tool optimizes the benchmarking workflow and saves time in analyzing performance results by automatically generating performance graphs. Use of HPC-Bench is illustrated with multiple MPI and SHMEM applications run on NERSC’s Cray XC30 for different problem sizes and different numbers of processes to measure their performance and scalability.

Improved Triangle Counting in Graph Streams: Power of Multi-Sampling

Presented by Kiana Mousavi, Iowa State University
Neeraj Kavassery-Parakkat, Kiana Mousavi Hanjani, Aduri Pavan

Some well-known streaming algorithms for estimating the number of triangles in a graph stream work as follows: sample a single triangle with high enough probability, and repeat this basic step to obtain a global triangle count. For example, the algorithm due to Buriol et al. picks a single vertex v and a single edge e uniformly at random and checks whether the two cross edges that connect v to e appear in the stream. Similarly, the neighborhood sampling algorithm attempts to sample a triangle by randomly choosing a single vertex v and a single neighbor u of v, then waiting for a third edge that completes the triangle. In both algorithms, the basic sampling step is repeated multiple times to obtain an estimate of the global triangle count in the input graph stream.
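To make the basic step concrete, the following is a minimal Python sketch of a Buriol-style estimator, written for this summary rather than taken from the authors' code: one edge is reservoir-sampled from the stream, one vertex v is drawn uniformly, and the step succeeds only if both cross edges joining v to the sampled edge appear after it. Under this scheme each triangle succeeds with probability 1/(m·n), so scaling the success rate by m·n gives an unbiased estimate of the triangle count.

```python
import random
from itertools import combinations

def buriol_basic_step(stream, n):
    """One basic step: reservoir-sample a single edge, draw a single
    random vertex v, and check whether both cross edges connecting v to
    the sampled edge appear later in the stream."""
    v = random.randrange(n)
    sampled = None
    seen_a = seen_b = False
    for i, (x, y) in enumerate(stream):
        if random.randrange(i + 1) == 0:      # reservoir sampling
            sampled = (x, y)
            seen_a = seen_b = False           # cross edges must come after
        else:
            a, b = sampled
            if {x, y} == {a, v}:
                seen_a = True
            if {x, y} == {b, v}:
                seen_b = True
    return seen_a and seen_b

def estimate_triangles(stream, n, trials):
    """Repeat the basic step; a success happens with probability
    T/(m*n), so scaling the success rate by m*n estimates T."""
    m = len(stream)
    hits = sum(buriol_basic_step(stream, n) for _ in range(trials))
    return m * n * hits / trials

# Demo on K5, the complete graph on 5 vertices (exactly 10 triangles).
random.seed(1)
stream = list(combinations(range(5), 2))
est = estimate_triangles(stream, 5, 30000)
print(est)   # close to 10
```

As the sketch shows, a single basic step is cheap but succeeds rarely, which is why many independent repetitions are needed for an accurate estimate.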

In this work, we propose a multi-sampling variant of these algorithms. In the case of Buriol et al.'s algorithm, instead of randomly choosing a single vertex and a single edge, we randomly sample multiple vertices and multiple edges and collect the cross edges that connect the sampled vertices to the sampled edges. In the case of the neighborhood sampling algorithm, we randomly pick multiple edges and multiple neighbors of those edges.
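One plausible reading of the multi-sampling idea, sketched in Python purely for illustration (the slot-based scheme and its normalization are assumptions of this sketch, not necessarily the paper's exact construction): maintain k independent edge-reservoir slots and l shared random vertices in a single pass, and count every (edge slot, vertex) pair whose two cross edges both arrive after that slot's current edge.

```python
import random
from itertools import combinations

def multi_sample_estimate(stream, n, k, l):
    """Single pass with k independent edge-reservoir slots and l shared
    random vertices; every (slot, vertex) pair acts like one basic
    sampling step, so one pass yields k*l (correlated) samples."""
    m = len(stream)
    vs = [random.randrange(n) for _ in range(l)]   # sampled vertices
    edges = [None] * k                             # per-slot sampled edge
    # seen[j][t] = [cross edge (a, vs[t]) seen, cross edge (b, vs[t]) seen]
    seen = [[[False, False] for _ in range(l)] for _ in range(k)]
    for i, (x, y) in enumerate(stream):
        for j in range(k):
            if random.randrange(i + 1) == 0:       # slot j resamples
                edges[j] = (x, y)
                for t in range(l):
                    seen[j][t] = [False, False]    # cross edges must follow
            else:
                a, b = edges[j]
                for t in range(l):
                    if {x, y} == {a, vs[t]}:
                        seen[j][t][0] = True
                    if {x, y} == {b, vs[t]}:
                        seen[j][t][1] = True
    hits = sum(s[0] and s[1] for row in seen for s in row)
    return m * n * hits / (k * l)   # unbiased under this slot scheme

# Demo on K5 (10 triangles): average a few passes to reduce variance.
random.seed(2)
stream = list(combinations(range(5), 2))
est = sum(multi_sample_estimate(stream, 5, 50, 20) for _ in range(40)) / 40
print(est)   # close to 10
```

The payoff of sharing samples is that one pass over the stream produces k·l candidate triangle checks instead of one, at the cost of correlation between the samples; the paper's analysis addresses the resulting space and accuracy trade-off.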

We provide a theoretical analysis of these algorithms and prove that this simple modification improves upon the known space and accuracy bounds. We also show experimentally that these algorithms outperform several well-known triangle counting streaming algorithms.