At WaveOne, I have been working on a range of exciting problems and developing some very interesting technology, which I can't share just yet :)
 

Previously, I was a member of the Harvard Intelligent Probabilistic Systems group, where I worked with my advisor Ryan Adams on a spectrum of fundamental problems in machine learning. I was active in the machine learning communities at MIT and Harvard: I contributed frequently to the Building Intelligent Probabilistic Systems blog and participated in the Harvard Machine Learning Tea.

 


Metric Learning with Adaptive Density Discrimination

Oren Rippel, Manohar Paluri, Piotr Dollar and Lubomir Bourdev

We propose an approach that addresses a number of subtle yet important issues that have stymied earlier distance metric learning algorithms. It maintains an explicit model of the distributions of the different classes in representation space, employs this knowledge to adaptively assess similarity, and pursues local discrimination by penalizing class distribution overlap. This allows us to surpass existing approaches by a significant margin on a number of criteria, including classification accuracy, training efficiency, and representation saliency.

International Conference on Learning Representations (ICLR), 2016.    PDF | BibTeX
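The core objective penalizes overlap between per-class densities in representation space. Below is a minimal NumPy sketch of that idea in a simplified setting with a single Gaussian per class and a shared variance; the paper's full loss instead uses multiple K-means clusters per class, and all names and hyperparameters here are illustrative:

    import numpy as np

    def density_overlap_loss(reps, labels, alpha=1.0):
        """Simplified magnet-style loss: push each example toward its own
        class density and away from the densities of competing classes."""
        classes = np.unique(labels)
        idx = np.searchsorted(classes, labels)
        means = np.stack([reps[labels == c].mean(axis=0) for c in classes])
        # shared variance estimated from distances to assigned class means
        var = ((reps - means[idx]) ** 2).sum(axis=1).mean() + 1e-8
        # log-density of each example under each class Gaussian (up to a constant)
        logits = -((reps[:, None, :] - means[None, :, :]) ** 2).sum(axis=2) / (2 * var)
        own = logits[np.arange(len(reps)), idx] - alpha  # margin alpha
        other = logits.copy()
        other[np.arange(len(reps)), idx] = -np.inf
        # hinged negative log-ratio of own-class to competing-class density
        return np.mean(np.maximum(0.0, np.logaddexp.reduce(other, axis=1) - own))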
 

 


Spectral Representations for Convolutional Neural Networks

Oren Rippel, Jasper Snoek, and Ryan P. Adams

We argue that, beyond its advantages for efficient convolution computation, the spectral domain also provides a powerful representation in which to model and train convolutional neural networks. We propose spectral pooling, which preserves considerably more information per parameter than other pooling strategies and enables flexibility in the choice of pooling output dimensionality. We also demonstrate the effectiveness of complex-coefficient spectral parameterization of convolutional filters, which leads to significantly faster convergence during training.

Neural Information Processing Systems (NIPS), 2015.    PDF | BibTeX
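Spectral pooling itself is a short computation: transform to the frequency domain, truncate the high frequencies, and transform back. A minimal NumPy sketch for a single-channel input, assuming even input and output sizes (the in-network version operates on feature maps and keeps everything differentiable):

    import numpy as np

    def spectral_pool(x, out_h, out_w):
        """Pool x (h x w) by cropping its centered spectrum to out_h x out_w."""
        h, w = x.shape
        F = np.fft.fftshift(np.fft.fft2(x))
        top, left = (h - out_h) // 2, (w - out_w) // 2
        F_crop = F[top:top + out_h, left:left + out_w]
        # the crop breaks exact conjugate symmetry, so drop the tiny
        # imaginary residue after inverting
        y = np.fft.ifft2(np.fft.ifftshift(F_crop)).real
        return y * (out_h * out_w) / (h * w)  # rescale to preserve mean intensity

Unlike max pooling, the output here can be any size at most the input's, which is the flexibility in output dimensionality mentioned above.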
 

Scalable Bayesian Optimization Using Deep Neural Networks

Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat and Ryan P. Adams

We explore the use of neural networks as an alternative to Gaussian processes for modeling distributions over functions. While this approach performs competitively with state-of-the-art GP-based approaches, it scales linearly in the number of observations rather than cubically. This allows us to achieve a previously intractable degree of parallelism, which we use to rapidly search over large spaces of models.

International Conference on Machine Learning (ICML), 2015.    PDF | BibTeX
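Concretely, the network's last hidden layer is treated as a set of adaptive basis functions, with a Bayesian linear model on top supplying the predictive uncertainties that drive the search. A minimal NumPy sketch of that final stage, with illustrative precision hyperparameters alpha (weights) and beta (noise):

    import numpy as np

    def bayes_linear_fit(Phi, y, alpha=1.0, beta=100.0):
        """Bayesian linear regression on features Phi (N x D):
        costs O(N D^2) instead of the O(N^3) of an exact GP."""
        D = Phi.shape[1]
        A = alpha * np.eye(D) + beta * Phi.T @ Phi  # posterior precision
        m = beta * np.linalg.solve(A, Phi.T @ y)    # posterior mean weights
        return m, A

    def bayes_linear_predict(phi, m, A, beta=100.0):
        """Predictive mean and variance at test features phi (M x D)."""
        mean = phi @ m
        var = 1.0 / beta + np.einsum('md,dm->m', phi, np.linalg.solve(A, phi.T))
        return mean, var

The predictive mean and variance then feed into a standard acquisition function such as expected improvement.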
 

MICMat Kernel Library for Xeon Phi

Oren Rippel, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat and Ryan P. Adams

I developed and maintained the MICMat (MIC Matrix) kernel library, which enables interfacing with Intel's Xeon Phi Coprocessor directly from pure Python. It provides an extensive set of primitives optimized for high-performance computation while allowing very convenient development.

Maintained 2014 - 2016.    GitHub repository
 

Learning Ordered Representations with Nested Dropout

Oren Rippel, Michael Gelbart and Ryan P. Adams

We study ordered representations of data in which different dimensions have different degrees of importance. To learn these, we introduce nested dropout. We rigorously show that applying nested dropout enforces identifiability of the units. We use the ordering property to construct data structures that permit retrieval in time logarithmic in the database size and independent of the dimensionality of the representation. We also show that ordered representations are a promising way to learn adaptive compression for efficient online data reconstruction.

International Conference on Machine Learning (ICML), 2014.    PDF | video lecture | BibTeX
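The mechanism is simple: during training, sample a truncation index per example and zero out all code units past it, so earlier units are forced to carry the most information. A minimal NumPy sketch, assuming a geometric distribution over the truncation index (parameter names are illustrative):

    import numpy as np

    def nested_dropout(code, p=0.1, rng=None):
        """Keep only the first b units of each code, b ~ Geometric(p)."""
        if rng is None:
            rng = np.random.default_rng()
        n, k = code.shape
        b = rng.geometric(p, size=(n, 1))  # b takes values 1, 2, ...
        mask = np.arange(k)[None, :] < np.minimum(b, k)
        return code * mask

At test time the full code can be truncated to any budget, which is what makes the retrieval and adaptive-compression applications above possible.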
 

Avoiding Pathologies in Very Deep Networks

David Duvenaud, Oren Rippel, Ryan P. Adams and Zoubin Ghahramani

To help suggest better deep neural network architectures, we analyze the related problem of constructing useful priors on compositions of functions. We study deep Gaussian processes, a type of infinitely wide, deep neural network. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Finally, we characterize the model class obtained by applying dropout to Gaussian processes.

Artificial Intelligence and Statistics (AISTATS), 2014.    PDF | video of 50-layer warping | BibTeX
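The pathology is easy to reproduce in one dimension: draw a GP sample, feed its output in as the next layer's input, and repeat. A minimal NumPy sketch of this composition, with an illustrative squared-exponential kernel and depth (deep draws concentrate on nearly flat functions, the behavior shown in the 50-layer warping video):

    import numpy as np

    def sample_deep_gp(x, depth=50, lengthscale=1.0, rng=None):
        """Sample f_depth(...f_2(f_1(x))...), each f_l an independent GP draw."""
        if rng is None:
            rng = np.random.default_rng(0)
        f = x.copy()
        for _ in range(depth):
            d2 = (f[:, None] - f[None, :]) ** 2
            K = np.exp(-0.5 * d2 / lengthscale ** 2) + 1e-6 * np.eye(len(f))  # jitter
            f = np.linalg.cholesky(K) @ rng.standard_normal(len(f))
        return f

    print(sample_deep_gp(np.linspace(-2, 2, 200)))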
 

High-Dimensional Probability Estimation with Deep Density Models

Oren Rippel and Ryan P. Adams

We introduce the deep density model for density estimation. We exploit insights from deep learning to construct a bijective map to a representation space under which the transformed data distribution is approximately factorized, with identical and known marginal densities. The simplicity of the latent distribution under the model allows us to explore it feasibly, and the invertibility of the map allows us to characterize the contraction of measure across it. This enables us to compute normalized densities for out-of-sample data.

Technical report, 2013.    PDF | BibTeX
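The normalized densities come from the change-of-variables formula: for a bijection z = f(x), log p(x) = log p_z(f(x)) + log |det df/dx|. A minimal NumPy sketch of this computation for a toy bijection (one invertible linear layer followed by an elementwise monotone nonlinearity; the paper's map is deeper, and all names here are illustrative):

    import numpy as np

    def log_density(x, L, a=0.5):
        """log p(x) under z = f(x), with p_z a standard normal."""
        # invertible linear layer: L lower-triangular with positive diagonal
        y = x @ L.T
        logdet = np.sum(np.log(np.diag(L)))
        # elementwise bijection y + a*tanh(y), monotone for a > -1
        z = y + a * np.tanh(y)
        logdet = logdet + np.sum(np.log1p(a * (1.0 - np.tanh(y) ** 2)), axis=1)
        log_pz = -0.5 * np.sum(z ** 2, axis=1) - 0.5 * z.shape[1] * np.log(2 * np.pi)
        return log_pz + logdet

    rng = np.random.default_rng(0)
    L = np.tril(rng.normal(size=(3, 3)), k=-1) + np.diag(np.exp(rng.normal(size=3)))
    print(log_density(rng.normal(size=(5, 3)), L))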