### Speakers

**Topic:** The Art of Measurement

**Abstract:** The central goal of compressed sensing is to capture attributes of a signal using very few measurements. In most work to date, this broader objective is exemplified by the important special case of classification or reconstruction from a small number of linear measurements. In the first part of this talk we use wireless information theory to derive fundamental limits on compressive classification: on the maximum number of classes that can be discriminated with low probability of error, and on the tradeoff between the number of classes and the probability of misclassification. In the second part we describe how to use information theory to guide the design of linear measurements by maximizing the mutual information between the measurements and the statistics of the source.
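As a concrete (purely illustrative) picture of the measurement setup the abstract describes, and not the speaker's method, the following sketch takes m random linear measurements y = Ax of a k-sparse signal in n dimensions, with m much smaller than n. All names and parameter values here are hypothetical:

```python
import random

# Illustrative compressive-measurement sketch: y = A x with m << n.
random.seed(0)

n, k, m = 100, 5, 20          # ambient dimension, sparsity, number of measurements

# A k-sparse signal: k random positions carry nonzero Gaussian values.
x = [0.0] * n
for i in random.sample(range(n), k):
    x[i] = random.gauss(0, 1)

# A random Gaussian measurement matrix A (m x n), a common generic choice.
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

# The compressive measurements: y_i = <a_i, x>.
y = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

print(len(y))   # 20 numbers summarize a 100-dimensional sparse signal
```

Recovering or classifying x from such a y is then the subject of the talk's fundamental limits.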

**Biography:** Robert Calderbank is Director of the Information Initiative at Duke University, where he is Professor of Mathematics and Electrical Engineering. Prior to joining Duke as Dean of Natural Sciences in 2010, he directed the Program in Applied and Computational Mathematics at Princeton University. Prior to joining Princeton in 2004, he was Vice President for Research at AT&T, in charge of what may have been the first industrial research lab where the primary focus was Big Data.

Professor Calderbank is well known for his contributions to voiceband modem technology and quantum information theory, and for the co-invention of space-time codes for wireless communication. His research papers have been cited more than 30,000 times, and his inventions are found in billions of consumer devices. Professor Calderbank was elected to the National Academy of Engineering in 2005 and has received a number of awards, including the 2013 IEEE Hamming Medal for his contributions to information transmission and the 2015 Claude E. Shannon Award.

**Topic:** A Beginner's Guide to Lattice Coding

**Abstract:** Lattice codes may be viewed as linear codes designed for use over Gaussian-noise channels and, more generally, channels with real-valued input and output alphabets. It is by now well established that lattice codes can achieve the capacity of the additive white Gaussian noise (AWGN) channel. Lattice coding schemes also provide the best-known achievable rates in many multi-user communication settings. They form the basis of the compute-and-forward strategy that lies at the heart of physical-layer (wireless) network coding. Beyond reliability of communication, lattice-based coding schemes have also been proposed for information-theoretically secure communication.

Lattice codes stand today where algebraic codes stood about 25 years ago. They have been shown to be theoretically capable of providing the best rates for reliable (and secure) communication in many AWGN settings, but they have struggled to fulfill their potential in practice. The main stumbling block has been finding low-complexity implementations of encoders and decoders for lattice codes in high dimensions.

In this tutorial, we will give an overview of the theory and applications of lattice codes. We will begin with the necessary mathematical definitions and background for lattices. The earliest application of lattices in the information-theoretic context is in vector quantization, which we will briefly describe. The main focus of the tutorial will be on AWGN channel coding and physical-layer network coding applications, but time permitting, we will also touch upon other multi-user communication applications. We will provide a survey of the approaches being used to make lattice coding practical, and list some of the open research problems in this field.
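To give a flavor of the quantization application mentioned above, here is a small self-contained sketch (standard textbook material in the style of Conway and Sloane, not code from the tutorial) of nearest-point quantization for the integer lattice Z^n and the checkerboard lattice D_n:

```python
# Lattice quantization sketch: map a real vector to the nearest point of
# Z^n, and of D_n = {z in Z^n : sum(z) is even}, via the classic decoders.

def closest_Zn(x):
    """Nearest point of Z^n: round each coordinate independently."""
    return [round(t) for t in x]

def closest_Dn(x):
    """Nearest point of D_n via the classic two-step decoder."""
    f = [round(t) for t in x]
    if sum(f) % 2 == 0:
        return f
    # Parity is odd: re-round the coordinate with the largest rounding
    # error in the opposite direction, which restores even parity while
    # increasing the squared distance by the least possible amount.
    i = max(range(len(x)), key=lambda j: abs(x[j] - f[j]))
    f[i] += 1 if x[i] > f[i] else -1
    return f

print(closest_Zn([0.6, 0.6, 0.1]))  # [1, 1, 0]
print(closest_Dn([0.6, 0.1, 0.1]))  # [0, 0, 0], parity corrected
```

The difficulty alluded to in the abstract is that for good high-dimensional lattices no such simple closed-form decoder is available.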

**Biography:** Navin Kashyap received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology Bombay, in 1995, the M.S. degree in Electrical Engineering from the University of Missouri-Rolla in 1997, and the M.S. degree in Mathematics and the Ph.D. degree in Electrical Engineering from the University of Michigan, Ann Arbor, in 2001. From November 2001 to November 2003, he was a postdoctoral research associate at the University of California, San Diego. From 2004 to 2010, he was on the faculty of the Department of Mathematics and Statistics at Queen's University, Kingston, Ontario. In January 2011, he joined the Department of Electrical Communication Engineering at the Indian Institute of Science, Bangalore, as an Associate Professor. His research interests lie primarily in the application of combinatorial and analytic methods in information and coding theory. Dr. Kashyap is an Associate Editor (Coding Theory) of the IEEE Transactions on Information Theory.

**Homepage:** www.ece.iisc.ernet.in/~nkashyap/

**Topic:** Second-order asymptotics in information theory

**Abstract:** In this tutorial, we present a unified treatment of single- and multi-user problems in Shannon's information theory where we depart from the requirement that the error probability decays asymptotically in the blocklength. Instead, the error probabilities for various problems are bounded above by non-vanishing constants, and the spotlight is shone on achievable coding rates as functions of the growing blocklengths. This represents the study of asymptotic estimates with non-vanishing error probabilities. In particular, we focus on the so-called second-order coding rates, which approximately quantify the backoff from the first-order fundamental limit at finite blocklengths.

We first discuss Strassen's seminal result for binary hypothesis testing, where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with a growing number of independent observations is characterized. We subsequently use this basic hypothesis testing result to develop second-order asymptotic expansions for fixed-length lossless source and channel coding. Finally, we consider some network information theory problems for which the second-order asymptotics are known. These problems include some classes of channels with random state, the Slepian-Wolf problem, and some classes of the Gaussian interference channel.

This tutorial is designed to be relatively self-contained. The student is expected to have some background in information theory at the level of Cover and Thomas to appreciate the material.
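As a numerical companion to the second-order expansions described above, the following sketch evaluates the standard normal ("Gaussian") approximation for the binary symmetric channel; the parameter values are purely illustrative:

```python
import math
from statistics import NormalDist

# Normal approximation to the maximal coding rate of a BSC(p) at
# blocklength n and error probability eps:
#   R(n, eps) ~ C - sqrt(V/n) * Qinv(eps) + log2(n) / (2n),
# where C is the capacity and V the channel dispersion.

def h2(p):
    """Binary entropy function in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_normal_approx(p, n, eps):
    C = 1 - h2(p)                                    # capacity
    V = p * (1 - p) * math.log2((1 - p) / p) ** 2    # dispersion
    q_inv = NormalDist().inv_cdf(1 - eps)            # Q^{-1}(eps)
    return C - math.sqrt(V / n) * q_inv + math.log2(n) / (2 * n)

for n in (500, 2000, 10000):
    print(n, round(bsc_normal_approx(p=0.11, n=n, eps=1e-3), 4))
```

The second-order term sqrt(V/n) Q^{-1}(eps) is the backoff from the first-order limit; the printed rates climb toward the capacity (about 0.5 bits per channel use at p = 0.11) as the blocklength grows.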

**Biography:** Vincent Y. F. Tan is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) and the Department of Mathematics at the National University of Singapore (NUS). He received the B.A. and M.Eng. degrees in Electrical and Information Sciences from Cambridge University in 2005, and the Ph.D. degree in Electrical Engineering and Computer Science (EECS) from the Massachusetts Institute of Technology in 2011. He was a postdoctoral researcher in the Department of ECE at the University of Wisconsin-Madison and, following that, a research scientist at the Institute for Infocomm Research (I2R), A*STAR, Singapore. His research interests include information theory, machine learning, and statistical signal processing.

Dr. Tan received the MIT EECS Jin-Au Kong Outstanding Doctoral Thesis Prize in 2011 and the NUS Young Investigator Award in 2014. He has authored a research monograph, *Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities*, in the Foundations and Trends in Communications and Information Theory series (NOW Publishers).

**Homepage:** www.ece.nus.edu.sg/stfpage/vtan

**Topic:** Polar codes

**Abstract:** Polarization is a technique discovered by Arikan to construct error-correcting codes for binary-input channels. These codes are provably capable of achieving the "symmetric capacity" of any such channel, have low encoding and decoding complexity, and are constructed in a completely explicit fashion. They remain the only family of codes which possess all these qualities. The talk will describe the codes and discuss their properties.
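The polarization phenomenon is easiest to see on the binary erasure channel, where the recursion has a closed form. The following sketch (a standard textbook computation, not material from the talk) tracks the erasure probabilities of the synthetic channels:

```python
# For a BEC with erasure probability z, one polarization step produces a
# degraded channel with erasure probability 2z - z^2 and an upgraded one
# with z^2. After m steps, the 2^m synthetic channels' erasure
# probabilities cluster near 0 or 1.

def polarize_bec(eps, m):
    zs = [eps]
    for _ in range(m):
        zs = [z for x in zs for z in (2 * x - x * x, x * x)]
    return zs

eps, m = 0.5, 12
zs = polarize_bec(eps, m)
good = sum(1 for z in zs if z < 0.01) / len(zs)   # near-perfect channels
bad = sum(1 for z in zs if z > 0.99) / len(zs)    # near-useless channels
print(round(good, 3), round(bad, 3))
```

Transmitting information only on the near-perfect channels and freezing the rest is exactly the polar coding recipe; as m grows, the fraction of good channels approaches the symmetric capacity 1 - eps.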

**Biography:** I. Emre Telatar received the B.Sc. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1986. He received the S.M. and Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1988 and 1992, respectively. In 1992, he joined the Communications Analysis Research Department at AT&T Bell Laboratories (later Lucent Technologies), Murray Hill, NJ. He has been at the EPFL since 2000.

Emre Telatar was the recipient of the IEEE Information Theory Society Paper Award in 2001. He was a program co-chair for the IEEE International Symposium on Information Theory in 2002, and an associate editor for Shannon theory for the IEEE Transactions on Information Theory from 2001 to 2004. He was awarded the EPFL Agepoly teaching prize in 2005.

Emre Telatar's research interests are in communication and information theory.

**Homepage:** people.epfl.ch/emre.telatar

**Topic:** Factor Graph Transforms

**Abstract:** Transforms of functions play an important role in many applications, be it for analysis purposes or for computational-complexity reasons. For example, the Fourier transform of a time function immediately tells us whether that function is bandlimited. Likewise, Fast-Fourier-Transform-based techniques can be used to efficiently compute the convolution of two discrete-time functions.
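To make the FFT-convolution remark concrete, here is a minimal self-contained sketch (illustrative code, not material from the presentation) that convolves two integer sequences via a radix-2 Cooley-Tukey FFT:

```python
import cmath

# Convolution via the FFT: transform both sequences, multiply pointwise,
# transform back. Complexity O(n log n) versus O(n^2) for direct summation.

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    sign = 1 if invert else -1
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def convolve(a, b):
    """Convolve two integer sequences; results are rounded back to ints."""
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2
    fa = fft([complex(x) for x in a] + [0j] * (n - len(a)))
    fb = fft([complex(x) for x in b] + [0j] * (n - len(b)))
    res = fft([x * y for x, y in zip(fa, fb)], invert=True)
    return [round((x / n).real) for x in res[:len(a) + len(b) - 1]]

print(convolve([1, 2, 3], [4, 5]))  # [4, 13, 22, 15]
```
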

Given that factor graphs represent multivariate functions, it is not surprising that transforms also play an important role for factor graphs. In this presentation, we first present an example of a factor graph transform that helps in the analysis of the partition function of a factor graph. (Many important quantities, in particular w.r.t. the capacity of some channels, can be expressed as the partition function of a factor graph, and so a good understanding of this object and its approximations is a worthwhile endeavor.)

Afterwards, we present an example of a factor graph transform that leads to an efficient implementation of the sum-product algorithm message update rules for certain function nodes. (The sum-product algorithm is at the basis of decoding algorithms for low-density parity-check codes, a class of codes which appears in various telecommunication standards, and so efficient implementations of the sum-product algorithm are highly desirable.)
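As a toy illustration of why sum-product message passing is efficient (an illustrative example, not one from the presentation), consider a chain factor graph f(x1, x2, x3) = g(x1, x2) h(x2, x3) over binary variables. Its partition function can be computed either by brute force or, far more cheaply, by the distributive law that underlies the sum-product algorithm:

```python
import itertools

# Two arbitrary positive factors on binary variables (hypothetical values).
g = {(a, b): 1.0 + a + 2 * b for a in (0, 1) for b in (0, 1)}
h = {(b, c): 2.0 + 3 * b * c for b in (0, 1) for c in (0, 1)}

# Brute force: sum f over all 2^3 assignments.
Z_brute = sum(g[a, b] * h[b, c]
              for a, b, c in itertools.product((0, 1), repeat=3))

# Sum-product: eliminate x1 and x3 first (messages into x2), then sum over x2.
msg_g = {b: sum(g[a, b] for a in (0, 1)) for b in (0, 1)}   # message g -> x2
msg_h = {b: sum(h[b, c] for c in (0, 1)) for b in (0, 1)}   # message h -> x2
Z_sp = sum(msg_g[b] * msg_h[b] for b in (0, 1))

print(Z_brute, Z_sp)  # 61.0 61.0
```

On a chain of N variables the same reordering turns an exponential sum into a linear number of small message computations, which is the efficiency the transforms in this talk aim to preserve for more general function nodes.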

Where appropriate, we will point out connections of these factor graph transforms to gauge transforms in physics and to Valiant's holographic transforms in theoretical computer science.

(The presentation is planned to be self-contained; basics of factor graphs will be introduced as needed.)

**Biography:** Pascal O. Vontobel received the Diploma degree in electrical engineering in 1997, the Post-Diploma degree in information techniques in 2002, and the Ph.D. degree in electrical engineering in 2003, all from ETH Zurich, Switzerland.

From 1997 to 2002 he was a research and teaching assistant at the Signal and Information Processing Laboratory at ETH Zurich, from 2006 to 2013 he was a research scientist with the Information Theory Research Group at Hewlett-Packard Laboratories in Palo Alto, CA, USA, and since 2014 he has been an Associate Professor at the Department of Information Engineering at the Chinese University of Hong Kong. Besides this, he was a postdoctoral research associate at the University of Illinois at Urbana-Champaign (2002-2004), a visiting assistant professor at the University of Wisconsin-Madison (2004-2005), a postdoctoral research associate at the Massachusetts Institute of Technology (2006), and a visiting scholar at Stanford University (2014). His research interests lie in information theory, data science, communications, and signal processing.

Dr. Vontobel has been an Associate Editor for the IEEE Transactions on Information Theory (2009-2012) and an Awards Committee Member of the IEEE Information Theory Society (2013-2014). Currently, he is an Associate Editor for the IEEE Transactions on Communications, a Distinguished Lecturer of the IEEE Information Theory Society, and a TPC Co-Chair of the upcoming 2016 IEEE International Symposium on Information Theory. He has been on the technical program committees of several international conferences and has co-organized several topical workshops, most recently a workshop at Princeton University on "Counting, Inference, and Optimization on Graphs." Moreover, he has been three times a plenary speaker at international information and coding theory conferences and has been awarded the ETH medal for his Ph.D. dissertation.

**Homepage:** sites.google.com/site/pascalvontobel/

### Tentative Schedule

- Hostel check-in can be made in **the afternoon of 7th June**. Ice-breaking activities will follow in the evening.
- Hostel check-out shall be made by **the morning of 13th June**.
- **Lucky speakers**: participants will vote for the four posters they liked the most in the first two days, and the corresponding participants will each get a chance to give a 20-minute talk. Prizes for being nominated.
- **Open problem session**: speakers, but also students, will post open problems on Piazza, and we'll use Piazza to vote on which problems get discussed. Prize(s) for the "most interesting questions from participants" (participant voting).
- **Video competition**: participants will have spent the week preparing short (2-minute) videos on any topic related to information theory. example 1 example 2 example 3