
no code implementations • 2 Oct 2021 • Wasim Huleihel, Arya Mazumdar, Soumyabrata Pal

Specifically, we show that any (possibly randomized) algorithm must make $\mathsf{Q} = \Omega(\frac{n^2}{k^2\chi^4(p||q)}\log^2n)$ adaptive queries (in expectation) to the adjacency matrix of the graph to detect the planted subgraph with probability more than $1/2$, where $\chi^2(p||q)$ is the chi-square divergence.
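To make the bound concrete, here is a small sketch that evaluates the chi-square divergence and the order of the query lower bound, assuming (as is standard in planted subgraph detection) that planted and background edges are Bernoulli($p$) and Bernoulli($q$); the function names are ours and constants are omitted:

```python
import math

def chi_square_bernoulli(p, q):
    """Chi-square divergence chi^2(p||q) between Bernoulli(p) and Bernoulli(q):
    sum over both outcomes of (p_x - q_x)^2 / q_x, which simplifies to
    (p - q)^2 / (q * (1 - q))."""
    return (p - q) ** 2 / q + ((1 - p) - (1 - q)) ** 2 / (1 - q)

def query_lower_bound_order(n, k, p, q):
    """Order of the adaptive-query lower bound
    Omega(n^2 / (k^2 * chi^4(p||q)) * log^2 n); chi^4 = (chi^2)^2.
    Constants from the Omega(.) are omitted."""
    chi2 = chi_square_bernoulli(p, q)
    return (n ** 2 / (k ** 2 * chi2 ** 2)) * math.log(n) ** 2
```

Note that as $p \to q$ the divergence vanishes and the required number of queries blows up, matching the intuition that a faint planted subgraph is hard to detect.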

no code implementations • 2 Sep 2021 • Sami Davies, Arya Mazumdar, Soumyabrata Pal, Cyrus Rashtchian

Mixtures of high dimensional Gaussian distributions have been studied extensively in statistics and learning theory.

no code implementations • 19 Jul 2021 • Arya Mazumdar, Soumyabrata Pal

With universality, it is known that $\tilde{\Theta}(k^2)$ 1bCS measurements are necessary and sufficient for support recovery (where $k$ denotes the sparsity).
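The 1bCS (one-bit compressed sensing) measurement model referred to here keeps only the sign of each linear measurement. A minimal generative sketch, with illustrative dimensions of our choosing; it also shows why only the support (not the magnitude) of the signal can be recovered, since signs are scale-invariant:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 100, 3          # ambient dimension and sparsity (illustrative values)
m = 4 * k * k          # on the order of k^2 measurements, as in the Theta~(k^2) bound

# a k-sparse signal: k nonzero entries on a random support
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# one-bit compressed sensing: only the sign of each linear measurement survives
A = rng.normal(size=(m, n))   # universal (signal-independent) Gaussian measurements
y = np.sign(A @ x)
```

Scaling the signal leaves every measurement unchanged (`np.sign(A @ (3 * x))` equals `y`), which is why 1bCS results are stated in terms of support recovery rather than full signal recovery.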

no code implementations • NeurIPS 2021 • Venkata Gandikota, Arya Mazumdar, Soumyabrata Pal

In this work, we study the number of measurements sufficient for recovering the supports of all the component vectors in a mixture in both these models.

1 code implementation • NeurIPS 2021 • Wasim Huleihel, Arya Mazumdar, Soumyabrata Pal

In particular, we provide algorithms for fuzzy clustering in this setting that ask $O(\mathsf{poly}(k)\log n)$ similarity queries and run in polynomial time, where $n$ is the number of items.
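To illustrate the query model (not the paper's algorithm), here is a sketch with a noiseless same-cluster oracle and a naive greedy strategy; the paper's setting is fuzzy/overlapping and its algorithms use far fewer, $O(\mathsf{poly}(k)\log n)$, queries than this $O(nk)$ baseline:

```python
def same_cluster_oracle(labels):
    """Noiseless similarity oracle: answers whether items i and j share a
    cluster. Only meant to illustrate the query interface."""
    def query(i, j):
        return labels[i] == labels[j]
    return query

def greedy_cluster(n, query):
    """Naive baseline: compare each item against one representative per
    discovered cluster. Uses O(n * k) similarity queries."""
    reps, assignment = [], {}
    for i in range(n):
        for r in reps:
            if query(i, r):
                assignment[i] = assignment[r]
                break
        else:                      # no representative matched: new cluster
            assignment[i] = len(reps)
            reps.append(i)
    return assignment
```

The gap between this baseline's $O(nk)$ queries and the paper's $O(\mathsf{poly}(k)\log n)$ bound is exactly what makes the query-efficient algorithms interesting.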

no code implementations • 29 Jan 2021 • Wasim Huleihel, Soumyabrata Pal, Ofer Shayevitz

One of the most surprising observations in our experiments is that our algorithm outperforms other static algorithms even when preferences do not change over time.

no code implementations • NeurIPS 2020 • Venkata Gandikota, Arya Mazumdar, Soumyabrata Pal

We study the hitherto unexplored problem of upper-bounding the query complexity of recovering all the hyperplanes, especially in the case when the hyperplanes are sparse.

no code implementations • ICML 2020 • Arya Mazumdar, Soumyabrata Pal

Mixture of linear regressions is a popular learning theoretic model that is used widely to represent heterogeneous data.

no code implementations • 19 Jan 2020 • Akshay Krishnamurthy, Arya Mazumdar, Andrew Mcgregor, Soumyabrata Pal

Our second approach uses algebraic and combinatorial tools and applies to binomial mixtures with shared trial parameter $N$ and differing success parameters, as well as to mixtures of geometric distributions.
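For the binomial-mixture case with shared trial parameter $N$, a small method-of-moments sketch shows the flavor of the algebraic approach: for an equal-weight mixture of Binomial($N$, $p_1$) and Binomial($N$, $p_2$), the mean and the second factorial moment determine $p_1 + p_2$ and $p_1^2 + p_2^2$, from which the parameters fall out as roots of a quadratic. This is our illustration under an equal-weight assumption, not the paper's general algorithm:

```python
import math

def binom_mixture_moments(N, p1, p2):
    """Exact mean and factorial moment E[X(X-1)] of an equal-weight mixture
    of Binomial(N, p1) and Binomial(N, p2), using E[X(X-1)] = N(N-1)p^2
    for a single Binomial(N, p)."""
    m1 = N * (p1 + p2) / 2
    f2 = N * (N - 1) * (p1 ** 2 + p2 ** 2) / 2
    return m1, f2

def recover_success_params(N, m1, f2):
    """Invert the moment map: with s = p1 + p2 and q = p1^2 + p2^2, the
    success parameters are the roots of t^2 - s*t + (s^2 - q)/2 = 0."""
    s = 2 * m1 / N
    q = 2 * f2 / (N * (N - 1))
    prod = (s ** 2 - q) / 2              # p1 * p2
    disc = math.sqrt(max(s ** 2 - 4 * prod, 0.0))
    return (s - disc) / 2, (s + disc) / 2
```

In practice the moments would be estimated from samples, so the recovered parameters carry estimation error; the sketch uses exact moments to keep the algebra visible.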

no code implementations • NeurIPS 2019 • Akshay Krishnamurthy, Arya Mazumdar, Andrew Mcgregor, Soumyabrata Pal

Our techniques are quite different from those in the previous work: for the noiseless case, we rely on a property of sparse polynomials, and for the noisy case, we provide new connections to learning Gaussian mixtures and use ideas from the theory of

no code implementations • 30 Oct 2019 • Akshay Krishnamurthy, Arya Mazumdar, Andrew Mcgregor, Soumyabrata Pal

In the problem of learning mixtures of linear regressions, the goal is to learn a collection of signal vectors from a sequence of (possibly noisy) linear measurements, where each measurement is evaluated on an unknown signal drawn uniformly from this collection.
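The generative model described here can be sketched in a few lines; dimensions, mixture size, and noise level below are illustrative values of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

d, L, num_samples = 5, 3, 200   # dimension, number of signal vectors, samples
sigma = 0.1                     # noise level (illustrative)

# the unknown collection of signal vectors
betas = rng.normal(size=(L, d))

# each measurement: draw a covariate x, pick a signal uniformly at random
# from the collection, and observe the (noisy) inner product <x, beta>
X = rng.normal(size=(num_samples, d))
which = rng.integers(L, size=num_samples)
y = np.einsum('ij,ij->i', X, betas[which]) + sigma * rng.normal(size=num_samples)
```

The learner sees only the pairs `(X, y)`; the assignments `which` are latent, which is what makes the problem a mixture rather than ordinary linear regression.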

no code implementations • NeurIPS 2019 • Wasim Huleihel, Arya Mazumdar, Muriel Médard, Soumyabrata Pal

In this paper, we look at the more practical scenario of overlapping clusters, and provide upper bounds (with algorithms) on the sufficient number of queries.

no code implementations • 31 Mar 2019 • Arya Mazumdar, Soumyabrata Pal

In this paper, we show that a recently popular model of semi-supervised clustering is equivalent to locally encodable source coding.

no code implementations • 29 Jun 2018 • Raj Kumar Maity, Arya Mazumdar, Soumyabrata Pal

Ermon et al. (2013) recently pioneered a way to practically compute approximations to large-scale counting and discrete integration problems using random hashes.
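A toy sketch in the spirit of this hashing idea (not the actual algorithm of Ermon et al.): each random XOR (parity) constraint halves a solution set in expectation, so the number of constraints needed to eliminate every element estimates $\log_2$ of the set's size:

```python
import random

def survives(x, constraints):
    """Check x in {0,1}^n against XOR constraints; each constraint is a pair
    (subset of indices, required parity bit)."""
    return all(sum(x[i] for i in idx) % 2 == b for idx, b in constraints)

def estimate_log2_count(S, n, trials=30, seed=0):
    """Hash-based counting sketch: add random parity constraints one at a
    time until no element of S survives; each constraint roughly halves the
    surviving set, so the count of constraints estimates log2 |S|.
    Returns the median over independent trials."""
    rnd = random.Random(seed)
    results = []
    for _ in range(trials):
        constraints = []
        while any(survives(x, constraints) for x in S):
            idx = [i for i in range(n) if rnd.random() < 0.5]
            constraints.append((idx, rnd.randint(0, 1)))
        results.append(len(constraints))
    results.sort()
    return results[len(results) // 2]
```

The point of the technique is that checking feasibility under parity constraints can be delegated to a solver, so the count of a huge set is reduced to a few feasibility queries; the toy above enumerates `S` explicitly only because it is a sketch.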

no code implementations • 12 Apr 2018 • Sainyam Galhotra, Arya Mazumdar, Soumyabrata Pal, Barna Saha

Our next contribution is in using the connectivity of random annulus graphs to provide necessary and sufficient conditions for efficient recovery of communities for {\em the geometric block model} (GBM).

no code implementations • NeurIPS 2017 • Arya Mazumdar, Soumyabrata Pal

In this paper, we show that a recently popular model of semi-supervised clustering is equivalent to locally encodable source coding.

no code implementations • 16 Sep 2017 • Sainyam Galhotra, Arya Mazumdar, Soumyabrata Pal, Barna Saha

To capture the inherent geometric features of many community detection problems, we propose to use a new random graph model of communities that we call a Geometric Block Model.
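A minimal generative sketch of a two-community Geometric Block Model: vertices get latent locations, and an edge appears when the locations are close, with a larger connection radius inside a community than across. The one-dimensional circle, the parameter names, and the radii below are our simplifications for illustration:

```python
import numpy as np

def sample_gbm(n, r_in, r_out, seed=0):
    """Two-community Geometric Block Model sketch: each vertex gets a uniform
    location on the unit circle [0, 1); vertices i, j are joined when their
    circular distance is at most r_in (same community) or r_out (different
    communities), with r_out < r_in."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(2, size=n)      # community assignments
    locs = rng.random(n)                  # latent geometric locations
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(locs[i] - locs[j])
            d = min(d, 1 - d)             # circular (wrap-around) distance
            r = r_in if labels[i] == labels[j] else r_out
            if d <= r:
                edges.append((i, j))
    return labels, locs, edges
```

Unlike the stochastic block model, edges here are strongly correlated through the latent geometry (two neighbors of a vertex are themselves likely close), which is the feature this model is designed to capture.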
