Name: Benjamin Schiller
Position: Research Assistant
E-Mail: schiller(AT)cs(DOT)tu-darmstadt(DOT)de
Phone: +49 (6151) 16 - 75 017
Fax: +49 (6151) 16 - 30 52
Office: S2|02 A312
Address: TU Darmstadt, FB 20, FG Peer-to-Peer-Netzwerke, Hochschulstraße 10, D-64289 Darmstadt, Germany

- Analysis of dynamic systems
- Graph-theoretic analysis of (dynamic) network topologies
- Web crawling and distributed data collection
- Social network analysis
- Routing in distributed systems

*P2P-based Social IPTV Service Platform*
- Collaboration with industry partner ETRI
- Development of a multi-tree P2P-based live streaming overlay (LSO)

*Inkrementelle Graphentheoretische Analyse Dynamischer Systeme* (Incremental Graph-theoretic Analysis of Dynamic Systems)
- Software Campus project in collaboration with industry partner Siemens
- Live analysis of traffic flows in an urban environment

*OSN Usage in Germany*
- Study performed in collaboration with Prof. Oliver Hinz, started January 2012
- Analysis of the usage patterns of different Social Networks in Germany

*GTNA - Graph Theoretic Network Analyzer*
- Framework allowing for the graph-theoretic analysis of network topologies
- Version 2.0 enables the rapid prototyping of routing algorithms

*DNA - Dynamic Network Analyzer*
- Framework for the graph-theoretic analysis of dynamic networks
- Focus on interchangeability of data structures, metrics, and input data generators

*TDF - Task Distribution Framework*
- Framework for the distribution of tasks to multiple workers
- Used for web crawling and data collection

- Web Chair of SESOC 2013
- TPC Member of AICT 2013
- TPC Member of SocInfo 2013
- TPC Member of ICCVE 2013
- Web Chair of SESOC 2014
- TPC Member of ICC'14 CISS
- TPC Member of ICCVE 2014
- TPC Member of ICC'15

- WS 13
- P2P Seminar
- P2P Networks - Exercise

- SS 13
- P2P Seminar
- DMS-P2P Seminar

- SS 11
- P2P Seminar

- WS 10
- P2P Networks - Exercise


As the popularity of live video streaming over the Internet continuously rises, many efforts have been made to reduce delays and improve the user experience. In this thesis, we focus on delay reduction and load balancing of peers in multi-tree based live video streaming overlays. Multi-tree overlays are designed to reduce delays, but without optimization strategies they do not guarantee the best possible results, as peers with high bandwidth can end up as leaves while peers with low bandwidth become interior nodes.

Missing load balancing can lead to situations where peers have to serve more children than their upload capacity can handle. This can dramatically reduce the quality of the user experience, resulting in video stalls. If such a situation occurs near the root of a tree, a large part of the overlay can suffer from it, as peers have to forward the received video segments.

We implement two different optimization strategies that handle insufficient-bandwidth situations and try to improve the topology when free bandwidth is available. The first, called the bandwidth-aware strategy, focuses on the bandwidth situation of neighboring peers. Its main goal is to reduce delays by reducing average tree heights and to balance load by reflecting current bandwidth situations. The second, called the location-aware optimization strategy, is designed to reduce neighbor delays by connecting peers to other peers that are close to them.

To show that our strategies reduce delays or balance load, we evaluate them on 49 distributed physical machines in the German-Lab testbed. A measurement system that collects data about the topology is implemented and used to quantify the benefits of our strategies compared to the random approach. Our results show that bandwidth awareness reduces average delays and upload utilization in most cases by moving high-bandwidth peers towards the root, which increases the tree height. With the location-aware strategy, we show that delays to directly connected neighbors are reduced, which also leads to a reduction of average delays in the tree topology.
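The core idea of the bandwidth-aware strategy can be illustrated with a small sketch. The function and field names below are hypothetical and the selection rule is a simplification of what a real overlay would do, not the thesis implementation: a joining peer only considers parents with free upload slots and, among those, prefers the one closest to the root.

```python
# Hypothetical sketch of bandwidth-aware parent selection in a multi-tree
# streaming overlay: only peers with free upload capacity are eligible,
# and among those the one closest to the root wins (flatter trees mean
# lower playback delay); ties are broken by spare capacity.

def pick_parent(candidates):
    """candidates: list of dicts with 'id', 'depth', 'capacity', 'children'."""
    eligible = [c for c in candidates if c["children"] < c["capacity"]]
    if not eligible:
        return None
    return min(eligible, key=lambda c: (c["depth"], -(c["capacity"] - c["children"])))

peers = [
    {"id": "a", "depth": 1, "capacity": 2, "children": 2},  # saturated, excluded
    {"id": "b", "depth": 2, "capacity": 4, "children": 1},
    {"id": "c", "depth": 2, "capacity": 3, "children": 2},
]
print(pick_parent(peers)["id"])  # b: tie on depth, broken by spare upload slots
```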

01.02.2013

Partition-based approach for a fast approximation of the clustering coefficient

Bachelor Thesis

finished

The global clustering coefficient (gcc) is an important metric for analyzing graphs. For large graphs, exact approaches are not feasible, since all current exact approaches have high computational complexity and mostly a large memory footprint due to matrix-based calculations. In this thesis, we provide an algorithm for a fast approximation of the gcc even for big graphs.

In our algorithm, the graph is divided into partitions. For each partition, the exact gcc is calculated. Each partition's gcc is then corrected by the ratio of existing edges to the maximally possible edges of the partition, in relation to the same ratio for the complete graph. This correction is the main contribution of this thesis, since it appears not to have been published before. The corrected partition gccs are averaged, weighted by the node count of each partition.

We compare our algorithm to a previously published approximation algorithm on a broad set of graphs and show that, for most evaluated graphs, we provide a significantly faster yet sufficient approximation. Since the time-consuming task in our approach, the calculation of the gcc for each partition, is independent for each partition, our approach can easily be computed in distributed systems, which makes it scalable for large graphs.
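The density-based correction described above can be sketched in a few lines. This is our illustration of the idea, not the thesis code, and the exact direction and form of the correction factor are assumptions:

```python
# Sketch of the partition-based gcc approximation: each partition's
# clustering coefficient is scaled by the full graph's edge density
# relative to the partition's own density, then the corrected values
# are averaged weighted by partition size. (Correction form assumed.)

def density(n_edges, n_nodes):
    """Ratio of existing to maximally possible edges in an undirected graph."""
    possible = n_nodes * (n_nodes - 1) / 2
    return n_edges / possible if possible else 0.0

def approx_gcc(partitions, graph_edges, graph_nodes):
    """partitions: list of (gcc, n_edges, n_nodes) tuples, one per partition."""
    g_density = density(graph_edges, graph_nodes)
    total, weight = 0.0, 0
    for gcc, e, n in partitions:
        p_density = density(e, n)
        # Correct the partition gcc toward the global edge density.
        corrected = gcc * (g_density / p_density) if p_density else 0.0
        total += corrected * n
        weight += n
    return total / weight if weight else 0.0
```

Since each `(gcc, e, n)` triple can be computed independently per partition, the loop body is exactly the part that could be farmed out to a distributed system.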

The clustering coefficient is a metric used in the analysis of graphs. It expresses how many of a node's neighbors are connected to each other. The metric is used especially often for graphs from social networks, where the clustering coefficient allows statements about the relationships among users.

Computing the clustering coefficient with known methods, however, is very memory- and compute-intensive and can accordingly only be done on machines with a lot of memory. The goal of this thesis is to find a method that is as memory-efficient as possible, so that the clustering coefficient can also be computed on machines with little memory.

Since machines with little main memory cannot hold the entire graph in memory, our approach uses a cache. This cache has a limited capacity, so only a bounded number of entries can be stored; it is therefore important to organize it as efficiently as possible. For this organization, we use different caching strategies. The computation is performed on graphs from social networks. Such graphs have the property that the neighbors of a node are more strongly interconnected than in graphs from other networks, so nodes share many common neighbors. We therefore use different processing orders to rearrange the sequence in which nodes are processed, such that consecutively processed nodes have as large a common neighborhood as possible. This makes better use of the cache's limited capacity.

In total, we tested 5 processing orders and 5 caching strategies on graphs from social networks. Combining these strategies, together with an extension for 3 of the processing orders, we tested 40 different combinations of processing order and caching strategy. The results clearly show that the processing order has a significant influence on the effectiveness of the cache and on the runtime of the computation.
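One classic caching strategy that fits this setting, LRU eviction of adjacency lists, can be sketched as follows. The class and the `load_neighbors` callback are our hypothetical names; the thesis's actual strategies and interfaces may differ:

```python
from collections import OrderedDict

# Hypothetical sketch of one caching strategy (LRU) for adjacency lists,
# for when the full graph does not fit into memory. load_neighbors
# stands in for a disk or network fetch of a node's neighbor list.

class NeighborCache:
    def __init__(self, capacity, load_neighbors):
        self.capacity = capacity
        self.load_neighbors = load_neighbors
        self.cache = OrderedDict()
        self.misses = 0

    def get(self, node):
        if node in self.cache:
            self.cache.move_to_end(node)  # mark as recently used
            return self.cache[node]
        self.misses += 1
        neighbors = self.load_neighbors(node)
        self.cache[node] = neighbors
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used entry
        return neighbors

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
cache = NeighborCache(2, adj.__getitem__)
for v in (0, 1, 0, 2, 1):
    cache.get(v)
print(cache.misses)  # 4: node 1 was evicted before its second access
```

A processing order that revisits nodes with overlapping neighborhoods back-to-back would turn more of these misses into hits, which is exactly the effect the thesis measures.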

As the Internet becomes increasingly important to all aspects of society, its disruption has severe consequences. Hence it is necessary to increase the resilience of the Internet. We define resilience as the ability to keep working well even when a set of nodes is removed from the network. How the Internet behaves under such node removals has, however, not been well analyzed. The purpose of this thesis is to create many attack scenarios based on various types of node removals and resilience metrics, and to use the results to analyze the resilience of the Internet.

In this thesis, we analyze the network's behavior under node removals using two types of metrics. The first is partition-based, such as the giant component, and the second is based on the path lengths of the network, such as the average shortest path length. There are also two types of node removals: random attacks and intentional attacks. In the intentional attack scenario, we have different ways to choose the deleted nodes; one is based on global network properties and the other on the local properties of each node in the network.

The results show that we must combine these two types of metrics to get the best view of the network. The partition-based metrics tell us when the network becomes fragmented, a point we call the critical point, and the shortest-path-based metrics provide more information about the network's behavior from the beginning up to this critical point. The results also indicate that degree- and betweenness-centrality-based removals are the most efficient methods: they find the most vital nodes in the network. Random node removal has nearly no impact on the network. In this thesis, all of the metrics, attack scenarios, and network generators are combined to gain better knowledge about them.
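An intentional degree-based attack combined with the giant-component metric can be sketched in a few lines. This is an illustrative toy, not the thesis's evaluation code; function names are ours:

```python
# Illustrative sketch of an intentional attack: repeatedly remove the
# highest-degree node and track the size of the largest connected
# component ("giant component") of an undirected graph.

def giant_component(adj):
    """Size of the largest connected component of an adjacency dict."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            comp += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best

def degree_attack(adj, removals):
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a mutable copy
    sizes = [giant_component(adj)]
    for _ in range(removals):
        target = max(adj, key=lambda v: len(adj[v]))  # highest degree first
        for w in adj.pop(target):
            adj[w].discard(target)
        sizes.append(giant_component(adj))
    return sizes

# A star graph fragments immediately once its hub is removed.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(degree_attack(star, 1))  # [5, 1]
```

Plotting such a size curve against the number of removals is what reveals the critical point described above.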

Darknets experience growing popularity. Their users object to the increasing restriction of freedom of speech and invasion of privacy by governmental institutions and companies, and therefore participate in darknets to anonymously share information with others. One of the most popular darknets is Freenet. It provides distributed file storage and uses indirect routing in order to protect the identities of content consumers and providers. In addition to its Opennet mode, where nodes connect to strangers, it also offers a Darknet mode in which a node only connects to peers it trusts. Freenet is the only popular system providing a sufficient level of anonymity and privacy for its members. However, since its release in 2000, only a few studies have examined the behavior of Freenet nodes and the network as a whole.

In this master thesis, we conduct measurements in Freenet in order to test several hypotheses about its functioning. In particular, we give an estimate of the size of Freenet's Opennet network and its structure; to this end, we reconstruct and analyze the Opennet topology. Moreover, we monitor user behavior and estimate churn rates and online times of single nodes. In addition, we look at the routing of file requests and their content. Complementing these Opennet measurements, we conduct a small Darknet measurement as well, observing in particular the behavior of bridge nodes connecting the Darknet with the Opennet.

This thesis presents a graph-theoretic approach to network resilience. Finding an ideal delete cover for a graph belongs to the class of NP-complete problems. A heuristic based on the Fiedler vector of the graph is derived and tested in simulations.

When working on scientific or commercial projects, problems can occur that cannot be solved by a single machine, e.g., due to the amount of required computational power or bandwidth limitations. However, it is often possible to divide the problem into smaller sub-problems, which can be solved independently by many different machines. The results of these sub-problems are then combined to solve the overall problem. Developing applications that support distributed problem solving can be a tedious task. It is therefore reasonable to separate the implementation of the actual problem solving from the functionality that distributes sub-problems and accumulates their results. Depending on the requirements of the problem, systems and libraries that provide such an abstraction require different features. In this thesis, the requirements focus on simplicity and on independence from the actual implementation of the problem-solving application. A framework complying with these points is designed and implemented. Two example applications using distributed problem solving are created to show the applicability of the implementation with respect to the previously described requirements. In addition, different approaches to this topic are presented, and the designed framework is compared to appropriate related work.

The Internet, and especially social networks, connect people around the world. In the simplest case, information, pictures, and news can be shared with friends, acquaintances, or relatives. Through profiles, fan groups, and other functions, users become transparent to their friends, regardless of whether a friend is a close friend, a college romance, or perhaps one's supervisor. This thesis deals with this problem and tries to exclude persons from information based on the degree of friendship. Likewise, it should be possible for people to share information without a direct Facebook friendship. This would allow, for example, old school friends who have a certain degree of knowledge about a person to acquire new information. The aim of this thesis is to develop a program that meets these requirements; no existing program covers these functions.

A Firefox extension that uses the Facebook API was written and includes several encryption procedures. These components are discussed further in Chapter 2. To give an overview of the program, Chapter 3 describes and explains the various parts of the development process. Possible further developments of this work are mentioned in Chapter 4. The Firefox extension is called KDAF and was tested under Microsoft Windows 7 and Linux Ubuntu.

Communities and roles are useful properties for analyzing the structure of networks and can be used to improve routing algorithms, but so far the calculation of these properties requires global information. Good algorithms for wireless networks have to rely solely on locally available information, because the aggregation of global information generates a high message overhead. In this thesis, we evaluate how roles and communities can be detected in wireless networks with minimal global information. We modify an existing community detection algorithm to run without global information and evaluate a universal role definition on wireless networks. We also prove that the number of communities in the neighborhood of each node is bounded from above, and introduce a new role definition for wireless networks. We evaluate the algorithm and role definition on a realistic generation model for wireless networks as well as on a random generation model.
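A well-known community detection scheme that already works with local information only is label propagation, where each node repeatedly adopts the label most frequent among its direct neighbors. The sketch below shows that generic algorithm, not the thesis's modified variant; the deterministic iteration order and tie-breaking rule are our simplifications:

```python
# Sketch of purely local community detection via label propagation:
# each node only inspects its direct neighbors' labels, so no global
# information (and no global message aggregation) is needed.

def label_propagation(adj, rounds=10):
    labels = {v: v for v in adj}  # every node starts in its own community
    for _ in range(rounds):
        for v in adj:  # fixed order keeps this sketch reproducible
            if not adj[v]:
                continue
            counts = {}
            for w in adj[v]:
                counts[labels[w]] = counts.get(labels[w], 0) + 1
            # Adopt the most frequent neighbor label; ties go to the larger label.
            labels[v] = max(counts, key=lambda l: (counts[l], l))
    return labels

# Two triangles joined by a single edge settle into two communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
print(sorted(set(labels.values())))  # [2, 5]
```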

Live streaming systems are currently enjoying ever-growing popularity. Since the classic client-server architecture does not scale well with the rising number of users, Peer-to-Peer based solutions are often used instead. Such a system was also developed at TU Darmstadt. However, like other Peer-to-Peer based systems, it is not easy to analyze due to its complexity. Therefore, in this thesis, a monitoring server was developed for this existing Peer-to-Peer streaming system. The monitoring server displays the network graphically and can also actively intervene in it, so the network can be analyzed and tested more easily.

Routing in distributed networks where the network structure, with its nodes and the links between them, must not be modified requires an identifier space in which each vertex has a coordinate, in contrast to other networks where routing tables help to choose the next hop to forward a message to. Graph drawing algorithms can handle the regular computation and assignment of coordinates. Although they are primarily designed for visual aesthetics, we simulate and analyze their field of application for routing purposes on graph models and existing real-world networks.

This thesis concerns itself with the problem of subsampling complex networks. The questions we attempt to answer are: Which properties of the original network are preserved? To what extent are they preserved? How does this depend on the algorithm used, including its general properties and specific parameters? How does this depend on the nature of the original network?

We group known subsampling algorithms together, attempt a classification, and sketch new ones where gaps in the classification appear. We choose a two-dimensional classification based on neighbor selection and dimensional strategy, ending up with 22 viable algorithmic variants. The network metrics we use are network size, degree distribution, diameter, characteristic path length, and clustering coefficient. The networks we subsample are Erdős-Rényi, Barabási-Albert, the complete network, and three real-world networks. We carry out the evaluation using and extending a framework for which we newly implement all evaluated subsampling algorithms.

We are able to successfully link certain properties of subsampled networks to categories of subsampling algorithms (as opposed to only individual ones). Such category results include: Network size is primarily determined by neighbor selection and secondarily by dimensional strategy. Diameter and characteristic path length are best preserved overall by three variants of the Depth-First Search algorithm. The clustering coefficient is well preserved by subsampling algorithms that choose a finite but nonzero number of neighbors in every step, while algorithms choosing no or all neighbors produce extreme results.
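One corner of the classification above, a breadth-first ("snowball") variant with an "all neighbors" selection strategy, can be sketched as follows. The function name and the stopping rule are ours, not the framework's:

```python
from collections import deque

# Illustrative sketch of a breadth-first (snowball) subsampling variant:
# starting from a seed node, keep every neighbor of each visited node
# until a target sample size is reached, then return the induced subgraph.

def bfs_subsample(adj, start, target_size):
    keep = {start}
    queue = deque([start])
    while queue and len(keep) < target_size:
        v = queue.popleft()
        for w in adj[v]:  # "all neighbors" selection strategy
            if w not in keep and len(keep) < target_size:
                keep.add(w)
                queue.append(w)
    # Induced subgraph on the sampled nodes.
    return {v: [w for w in adj[v] if w in keep] for v in keep}

# A path graph 0-1-...-9 sampled from node 0 yields its first segment.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 9] for i in range(10)}
sample = bfs_subsample(path, 0, 4)
print(sorted(sample))  # [0, 1, 2, 3]
```

Swapping the queue for a stack would give the depth-first variant, which is the family the thesis found best at preserving diameter and characteristic path length.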

Anonymous Peer-to-Peer systems, also called Darknets, have been a topic of interest both in research and society during the last few years. Routing in such networks is particularly challenging, since it has to rely on nothing but local information, and the number of links is typically small in comparison with other social networks. Motivated by an undirected small-world model, we introduce a Lookahead routing strategy using IDs that are chosen with regard to local structures. Simulations show that our algorithm has more than twice the success rate of the one used in the current version of Freenet - to our knowledge the only system satisfying the requirements of a Darknet - at the price of a slightly higher cost per step.
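The general idea of greedy routing with lookahead can be sketched on a ring ID space: instead of forwarding to the neighbor whose own ID is closest to the target, a node also inspects its neighbors' neighbor IDs and forwards toward the best two-hop option. This is a generic illustration under our own assumptions (integer ring IDs, a hop cap instead of backtracking), not the thesis's ID assignment or metric:

```python
# Sketch of greedy routing with one-hop lookahead on a ring ID space.

def ring_dist(a, b, n):
    """Distance between two IDs on a ring of size n."""
    d = abs(a - b) % n
    return min(d, n - d)

def lookahead_route(graph, ids, src, dst, n, max_hops=20):
    cur, hops = src, 0
    while cur != dst and hops < max_hops:
        def score(w):
            # Best distance to the target ID reachable from w in one more hop.
            reachable = [ids[w]] + [ids[x] for x in graph[w]]
            return min(ring_dist(i, ids[dst], n) for i in reachable)
        cur = min(graph[cur], key=score)
        hops += 1
    return hops if cur == dst else None

# Ring 0..5 with a chord between 0 and 3; IDs equal node positions.
graph = {0: [1, 5, 3], 1: [0, 2], 2: [1, 3], 3: [2, 4, 0], 4: [3, 5], 5: [4, 0]}
ids = {i: i for i in range(6)}
print(lookahead_route(graph, ids, 0, 3, 6))  # 1: the chord is taken directly
```

The extra cost per step that the abstract mentions corresponds to each node having to know (or query) its neighbors' neighbor IDs when computing `score`.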

The analysis of Online Social Networks and similar services requires the **acquisition of large data sets**. Commonly, **distributed web crawlers** are used to collect this kind of information. The distribution is required to circumvent rate limiting employed by the service provider and avoid bottlenecks with network communication.

Web crawlers are simple programs that fetch data as specified by an **input**, such as downloading a file from a given URL. Depending on the task at hand, the data can be post-processed and may imply the download of further data. Commonly, a **central entity** provides commands or tasks to a set of workers that perform the tasks and return the **results**. This central server then aggregates the results and creates further tasks based on failed tasks, the returned results, and recurring requirements.

In previous work, we designed and implemented such a framework for the distribution of tasks from a central entity to a set of workers (**Task Distribution Framework**). While it supports all actions required for the execution of the aforementioned workflow, it does not include a simple way to record and output statistics. Also, the deployment of multiple parallel crawls has to be done by changing configuration files by hand.

The goal of this thesis is the **extension** of the Task Distribution Framework by logging and statistics capabilities. Furthermore, a web front-end should be **created** to display the generated statistics and the system status. Finally, a basic administration interface for **managing** different projects inside the framework should be created and deployed on **our infrastructure**, currently consisting of 15 dedicated crawling machines.

The tasks of this thesis will be to:

- Understand the concepts of the Task Distribution Framework
- Design and implement logging and statistic components
- Design and implement an administration front-end
- Deploy and test the framework on our infrastructure
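The master/worker workflow described above can be sketched in miniature. This is our own illustration of the pattern, not the Task Distribution Framework's API; names like `run_tasks` and the retry policy are assumptions:

```python
import queue
import threading

# Minimal sketch of the master/worker pattern behind a task distribution
# system: a central queue hands out tasks, workers process them and
# report results, and failed tasks are re-queued for a bounded retry.
# (In this toy, workers simply exit once the queue drains.)

def run_tasks(tasks, worker_fn, n_workers=4, max_attempts=2):
    todo, results = queue.Queue(), []
    lock = threading.Lock()
    for t in tasks:
        todo.put((t, 1))

    def worker():
        while True:
            try:
                task, attempt = todo.get_nowait()
            except queue.Empty:
                return
            try:
                res = worker_fn(task)
            except Exception:
                if attempt < max_attempts:
                    todo.put((task, attempt + 1))  # retry a failed task
                continue
            with lock:
                results.append((task, res))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return dict(results)

urls = [f"page-{i}" for i in range(5)]
crawled = run_tasks(urls, len)  # len stands in for "fetch and measure"
print(crawled["page-3"])  # 6
```

The logging and statistics components this thesis adds would hook into exactly the points where tasks are dequeued, retried, and completed.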

The goal of this thesis is the development of a browser plugin that allows the user to easily generate simple web-crawl directives and parsers for web pages.

- WS 13
- Counting triangles with limited memory

- SS 13
- Distributed Web Crawlers
- Influence in Social Networks
- Information Dissemination in Social Networks

- WS 12
- A Survey on Tools for Graph Analysis
- Counting Triangles
- Single Pair Shortest Path Algorithms
- Single Source and All-Pairs Shortest Paths

- SS 12
- A Graph Database Comparison

- WS 11
- Censorship Resistant Publishing Systems
- Determining the Degree of Anonymity
- Development on Friend-to-Friend Networks

- SS 11
- A Classification of Graph Drawing Algorithms
- Attacks on Tor - A Survey

- WS 10
- P2P-based Live Streaming Systems
- P2P-based Video-on-Demand Streaming
- P2P-based VoIP
- Skype

Tim Grube,
Benjamin Schiller,
Thorsten Strufe

In:*Proceedings of the 2nd International Workshop on Dynamic Networks and Knowledge Discovery (DyNaK 2014)*,
Vol. 1229,
p. 37-48,
September 2014

[Inproceedings]

Resilient Tree-based Live Streaming for Mobile Scenarios

Benjamin Schiller, Giang Nguyen, Thorsten Strufe

[Inproceedings]

CoMon: A System Architecture for Improving Caching in CCN - (POSTER PAPER)

Hani Salah, Benjamin Schiller, Thorsten Strufe

[Inproceedings]

Resilient Tree-based Live Streaming in Reality

Benjamin Schiller, Giang Nguyen, Thorsten Strufe

[Online-Edition: http://www.p2p13.org/index.php/program]

[Inproceedings]

Dynamic Network Analyzer - Building a Framework for the Graph-theoretic Analysis of Dynamic Networks

Benjamin Schiller, Thorsten Strufe

[Inproceedings]

Attack Resistant Network Embeddings for Darknets

Benjamin Schiller, Stefanie Roos, Andreas Höfer, Thorsten Strufe

[Inproceedings]

GTNA - A Framework for the Graph-Theoretic Network Analysis

Benjamin Schiller, Dirk Bradler, Immanuel Schweizer, Max Mühlhäuser, Thorsten Strufe


[Inproceedings]

Towards a distributed crisis response communication system

Dirk Bradler, Benjamin Schiller, Erwin Aitenbichler, Nicolas Liebau

[Inproceedings]