Seminar WS 2015/16
This semester we will cover topics from the area of mobile software. Information on how to enroll will be posted here shortly.
Here is the list of papers / topics to choose from:
Mobile games, and especially multiplayer games, are a very popular daily distraction for many users. We hypothesise that commuters travelling on public buses or trains would enjoy being able to play multiplayer games with their fellow commuters to alleviate the commute burden and boredom. We present quantitative data to show that the typical one-way commute time is fairly long (at least 25 minutes on average) as well as survey results indicating that commuters are willing to play multiplayer games with other random commuters. In this paper, we present GameOn, a system that allows commuters to participate in multiplayer games with each other using p2p networking techniques that reduce the need to use high-latency and possibly expensive cellular data connections. We show how GameOn uses a cloud-based matchmaking server to eliminate the overheads of discovery, as well as why GameOn uses Wi-Fi Direct over Bluetooth as the p2p networking medium. We describe the various system components of GameOn and their implementation. Finally, we present numerous results collected by using GameOn, with three real games, on many different public trains and buses with up to four human players per game.
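The matchmaking step described above can be sketched as a toy model. This is a minimal sketch, assuming players register with an identifier for the vehicle or route they are on; the class and method names are illustrative and not GameOn's actual interface.

```python
from collections import defaultdict

class MatchmakingServer:
    """Toy sketch of a cloud-based matchmaking server: commuters register
    with a route identifier (e.g. the bus or train line they are riding),
    and the server groups them into games of up to `group_size` players.
    Once matched, the players would switch to direct Wi-Fi Direct links,
    avoiding further cellular traffic."""

    def __init__(self, group_size=4):
        self.group_size = group_size
        self.waiting = defaultdict(list)  # route id -> waiting player ids

    def register(self, player_id, route_id):
        """Queue a player; return a matched group once enough players
        on the same route are waiting, else None."""
        queue = self.waiting[route_id]
        queue.append(player_id)
        if len(queue) == self.group_size:
            group = list(queue)
            queue.clear()
            return group  # these players now form one p2p game session
        return None
```

Grouping by route reflects the paper's premise that only co-located commuters (who can reach each other over Wi-Fi Direct) should be matched.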
This paper presents Kahawai, a system that provides high-quality gaming on mobile devices, such as tablets and smartphones, by offloading a portion of the GPU computation to server-side infrastructure. In contrast with previous thin-client approaches that require a server-side GPU to render the entire content, Kahawai uses collaborative rendering to combine the output of a mobile GPU and a server-side GPU into the displayed output. Compared to a thin client, collaborative rendering requires significantly less network bandwidth between the mobile device and the server to achieve the same visual quality and, unlike a thin client, collaborative rendering supports disconnected operation, allowing a user to play offline - albeit with reduced visual quality.
Kahawai implements two separate techniques for collaborative rendering: (1) a mobile device can render each frame with reduced detail while a server sends a stream of per-frame differences to transform each frame into a high detail version, or (2) a mobile device can render a subset of the frames while a server provides the missing frames. Both techniques are compatible with the hardware-accelerated H.264 video decoders found on most modern mobile devices. We implemented a Kahawai prototype and integrated it with the idTech 4 open-source game engine, an advanced engine used by many commercial games. In our evaluation, we show that Kahawai can deliver gameplay at an acceptable frame rate, and achieve high visual quality using as little as one-sixth of the bandwidth of the conventional thin-client approach. Furthermore, a 50-person user study with our prototype shows that Kahawai can deliver the same gaming experience as a thin client under excellent network conditions.
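Technique (1), per-frame delta streaming, can be illustrated with a toy model. This is a minimal sketch over flat arrays of 8-bit pixel values; it ignores the H.264 compression Kahawai actually applies to the delta stream, and the function names are my own.

```python
def frame_delta(high, low):
    """Server side: per-pixel difference between the high-detail frame
    rendered on the server and the low-detail frame the client is
    expected to render locally. Modular arithmetic keeps values in
    the 8-bit range."""
    return [(h - l) % 256 for h, l in zip(high, low)]

def apply_delta(low, delta):
    """Client side: combine the locally rendered low-detail frame with
    the streamed delta to reconstruct the high-detail frame."""
    return [(l + d) % 256 for l, d in zip(low, delta)]
```

The bandwidth savings come from the delta being mostly near-zero (and thus highly compressible) wherever the low-detail render already matches the high-detail one.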
Mobile devices have less computational power and poorer Internet connections than other computers. Computation offload, in which some portions of an application are migrated to a server, has been proposed as one way to remedy this deficiency. Yet, partition-based offload is challenging because it requires applications to accurately predict whether mobile or remote computation will be faster, and it requires that the computation be large enough to overcome the cost of shipping state to and from the server. Further, offload does not currently benefit network-intensive applications.
In this paper, we introduce Tango, a new method for using a remote server to accelerate mobile applications. Tango replicates the application and executes it on both the client and the server. Since either the client or the server execution may be faster during different phases of the application, Tango allows either replica to lead the execution. Tango attempts to reduce user-perceived application latency by predicting which replica will be faster and allowing it to lead execution and display output, leveraging the better network and computation resources of the server when the application can benefit from them. It uses techniques inspired by deterministic replay to keep the two replicas in sync, and it uses flip-flop replication to allow leadership to float between replicas. Tango currently works for several unmodified Android applications. In our results, two computation-heavy applications obtain up to 2-3x speedup, and five network applications obtain from 0 to 2.6x speedup.
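The core leadership decision can be sketched as a simple cost comparison. This is a minimal sketch under the assumption that latencies can be predicted per phase; the function name and parameters are illustrative, not Tango's real prediction machinery.

```python
def choose_leader(client_latency_ms, server_latency_ms, network_rtt_ms):
    """Pick which replica should lead for the next application phase:
    the server replica only wins if its predicted compute time plus the
    cost of shipping its output back over the network beats executing
    locally on the client."""
    server_total = server_latency_ms + network_rtt_ms
    return "server" if server_total < client_latency_ms else "client"
```

This captures why leadership must "flip-flop": a computation-heavy phase favors the server despite the network round trip, while a cheap interactive phase favors the client.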
Motivated by safety challenges resulting from distracted pedestrians, this paper presents a sensing technology for fine-grained location classification in an urban environment. It seeks to detect the transitions from sidewalk locations to in-street locations, to enable applications such as alerting texting pedestrians when they step into the street. In this work, we use shoe-mounted inertial sensors for location classification based on surface gradient profile and step patterns. This approach is different from existing shoe sensing solutions that focus on dead reckoning and inertial navigation. The shoe sensors relay inertial sensor measurements to a smartphone, which extracts the step pattern and the inclination of the ground a pedestrian is walking on. This allows detecting transitions such as stepping over a curb or walking down sidewalk ramps that lead into the street. We carried out walking trials in metropolitan environments in the United States (Manhattan) and Europe (Turin). The results from these experiments show that we can accurately determine transitions between sidewalk and street locations to identify pedestrian risk.
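The two transition cues the abstract names (curb steps and downward ramps) can be sketched as a per-step classifier. This is a minimal sketch; the thresholds are illustrative guesses, not the values from the paper, and the real system works on raw inertial signals rather than pre-extracted features.

```python
def classify_step(incline_deg, height_drop_m,
                  ramp_threshold_deg=4.0, curb_threshold_m=0.10):
    """Classify a single step from shoe-mounted inertial data.
    A sudden vertical drop suggests stepping off a curb; a sustained
    downward incline suggests walking down a sidewalk ramp into the
    street; otherwise the step is treated as level sidewalk."""
    if height_drop_m >= curb_threshold_m:
        return "curb-step"
    if incline_deg <= -ramp_threshold_deg:
        return "ramp-down"
    return "level"
```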
The wearable revolution, as a mass-market phenomenon, has finally arrived. As a result, the question of how wearables should evolve over the next 5 to 10 years is assuming an increasing level of societal and commercial importance. A range of open design and system questions are emerging, for instance: How can wearables shift from being largely health and fitness focused to tracking a wider range of life events? What will become the dominant methods through which users interact with wearables and consume the data collected? Are wearables destined to be cloud and/or smartphone dependent for their operation?
Towards building the critical mass of understanding and experience necessary to tackle such questions, we have designed and implemented ZOE - a match-box sized (49g) collar- or lapel-worn sensor that pushes the boundary of wearables in an important set of new directions. First, ZOE aims to perform multiple deep sensor inferences that span key aspects of everyday life (viz. personal, social and place information) on continuously sensed data; while also offering this data not only within conventional analytics but also through a speech dialog system that is able to answer impromptu casual questions from users. (Am I more stressed this week than normal?) Crucially, and unlike other rich-sensing or dialog supporting wearables, ZOE achieves this without cloud or smartphone support - this has important side-effects for privacy since all user information can remain on the device. Second, ZOE incorporates the latest innovations in system-on-a-chip technology together with a custom daughter-board to realize a three-tier low-power processor hierarchy. We pair this hardware design with software techniques that manage system latency while still allowing ZOE to remain energy efficient (with a typical lifespan of 30 hours), despite its high sensing workload, small form-factor, and need to remain responsive to user dialog requests.
The smartphone has become an important part of our daily lives. However, the user experience is still far from being optimal. In particular, despite the rapid hardware upgrades, current smartphones often suffer various unpredictable delays during operation, e.g., when launching an app, leading to poor user experience. In this paper, we investigate the behavior of reads and writes in smartphones. We conduct the first large-scale measurement study on the Android I/O delay using the data collected from our Android application running on 2611 devices over nine months. Among other factors, we observe that reads experience up to 626% slowdown when blocked by concurrent writes for certain workloads. Additionally, we show the asymmetry of the slowdown of one I/O type due to another, and elaborate on the speedup of concurrent I/Os over serial ones. We use this obtained knowledge to design and implement a system prototype called SmartIO that reduces the application delay by prioritizing reads over writes, and grouping them based on assigned priorities. SmartIO issues I/Os with optimized concurrency parameters. The system is implemented on the Android platform and evaluated extensively on several groups of popular applications. The results show that our system reduces launch delays by up to 37.8%, and run-time delays by up to 29.6%.
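The read-over-write prioritization can be sketched as a priority queue. This is a minimal toy model, assuming a two-level priority scheme with FIFO order within each level; the class name and interface are illustrative, not SmartIO's kernel-level implementation.

```python
import heapq

class SmartIOQueue:
    """Toy model of read-over-write I/O scheduling: reads get a higher
    priority so they are never blocked behind queued writes, mirroring
    the observation that reads slow down badly behind concurrent
    writes."""
    READ, WRITE = 0, 1  # lower value = dispatched first

    def __init__(self):
        self._heap = []
        self._seq = 0  # monotonic counter: FIFO tie-break per priority

    def submit(self, op, payload):
        """Enqueue an I/O request ('read' or 'write')."""
        prio = self.READ if op == "read" else self.WRITE
        heapq.heappush(self._heap, (prio, self._seq, op, payload))
        self._seq += 1

    def dispatch(self):
        """Pop the next request to issue: all pending reads before any write."""
        _, _, op, payload = heapq.heappop(self._heap)
        return op, payload
```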
Voice control has emerged as a popular method for interacting with smart devices such as smartphones and smartwatches. Popular voice control applications like Siri and Google Now are already used by a large number of smartphone and tablet users. A major challenge in designing a voice control application is that it requires continuous monitoring of the user's voice input through the microphone. Such applications utilize hotwords such as "Okay Google" or "Hi Galaxy", allowing them to distinguish the user's voice commands from her other conversations. A voice control application has to continuously listen for hotwords, which significantly increases the energy consumption of the smart devices.
To address this energy efficiency problem of voice control, we present AccelWord in this paper. AccelWord is based on the empirical evidence that accelerometer sensors found in today's mobile devices are sensitive to the user's voice. We also demonstrate that the effect of the user's voice on accelerometer data is rich enough that it can be used to detect the hotwords spoken by the user. To achieve the goal of low energy cost but high detection accuracy, we combat multiple challenges, e.g., how to extract unique signatures of the user's spoken hotwords from accelerometer data alone and how to reduce the interference caused by the user's mobility.
Finally, we implement AccelWord as a standalone application running on Android devices. Comprehensive tests show AccelWord has a hotword detection accuracy of 85% in static scenarios and 80% in mobile scenarios. Compared to microphone-based hotword detection applications such as Google Now and Samsung S Voice, AccelWord is 2 times more energy efficient while achieving accuracy of 98% and 92% in static and mobile scenarios, respectively.
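A crude stand-in for the first detection stage can be sketched as an energy test on an accelerometer window. This is a minimal sketch under the assumption that voice-induced vibration raises the signal's variance; the threshold and feature are illustrative, and the paper extracts far richer signatures to recognize specific hotwords.

```python
def looks_like_speech(samples, energy_threshold=0.02):
    """Decide whether a window of accelerometer magnitude samples
    plausibly contains voice-induced vibration by comparing the
    signal's variance around its mean (a simple energy measure)
    against a threshold. Quiet or steady motion yields low variance;
    speech-band vibration raises it."""
    mean = sum(samples) / len(samples)
    energy = sum((s - mean) ** 2 for s in samples) / len(samples)
    return energy > energy_threshold
```

Running such a cheap test on the always-on, low-power accelerometer is what lets the energy-hungry microphone pipeline stay off most of the time.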
The idea of augmented reality - the ability to look at a physical object through a camera and view annotations about the object - is certainly not new. Yet, this apparently feasible vision has not yet materialized into a precise, fast, and comprehensively usable system. This paper asks: What does it take to enable augmented reality (AR) on smartphones today? To build a ready-to-use mobile AR system, we adopt a top-down approach cutting across smartphone sensing, computer vision, cloud offloading, and linear optimization. Our core contribution is in a novel location-free geometric representation of the environment - from smartphone sensors - and using this geometry to prune down the visual search space. Metrics of success include both accuracy and latency of object identification, coupled with the ease of use and scalability in uncontrolled environments. Our converged system, OverLay, is currently deployed in the engineering building and open for use by the general public; ongoing work is focused on a campus-wide deployment to serve as a "historical tour guide" of UIUC. Performance results and user responses thus far have been promising, to say the least.
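Using sensor geometry to prune the visual search space can be illustrated with a toy bearing filter. This is a minimal sketch, assuming each candidate object carries a known bearing relative to the user; the field-of-view value and data layout are my own assumptions, not OverLay's representation.

```python
def prune_candidates(user_heading_deg, candidates, fov_deg=60.0):
    """Keep only objects whose bearing (relative to the phone's compass
    heading) falls inside the camera's field of view, so the expensive
    visual matcher only searches a much smaller candidate set.
    `candidates` is a list of (name, bearing_deg) pairs."""
    half = fov_deg / 2.0
    kept = []
    for name, bearing_deg in candidates:
        # signed angular difference in (-180, 180]
        diff = (bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= half:
            kept.append(name)
    return kept
```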
========== MobiSys 2014 ==========
Yan Wang (Stevens Institute of Technology), Jie Yang (Oakland University), Yingying Chen (Stevens Institute of Technology), Hongbo Liu (Indiana University-Purdue University Indianapolis), and Marco Gruteser and Richard P. Martin (Rutgers University)
Kiryong Ha, Zhuo Chen, Wenlu Hu, and Wolfgang Richter (Carnegie Mellon University), Padmanabhan Pillai (Intel Labs), and Mahadev Satyanarayanan (Carnegie Mellon University)
Ruogu Zhou and Guoliang Xing (Michigan State University)
Shaxun Chen, Amit Pande, and Prasant Mohapatra (UC Davis)
Jonathan Crussell, Ryan Stevens, and Hao Chen (UC Davis)
Lenin Ravindranath (MIT), Suman Nath and Jitendra Padhye (Microsoft Research), and Hari Balakrishnan (MIT)
Shuai Hao and Bin Liu (University of Southern California), Suman Nath (Microsoft Research), and William G.J. Halfond and Ramesh Govindan (University of Southern California)
Ardalan Amiri Sani, Kevin Boos, Min Hong Yun, and Lin Zhong (Rice University)
Yin Yan, Shaun Cosgrove, Varun Anand, Amit Kulkarni, Sree Harsha Konduri, Steven Y. Ko, and Lukasz Ziarek (SUNY Buffalo)
Shahriar Nirjon (University of Virginia) and Jie Liu, Gerald DeJean, Bodhi Priyantha, Yuzhe Jin, and Ted Hart (Microsoft Research, Redmond, WA)
Alex Mariakakis (University of Washington) and Souvik Sen, Jeongkeun Lee, and Kyu-Han Kim (HP Labs)
AMC: Verifying User Interface Properties for Vehicular Applications
Kyungmin Lee and Jason Flinn (University of Michigan), T.J. Giuli (Ford Motor Company), Brian Noble (University of Michigan), and Christopher Peplin (Ford Motor Company)