This paper presents a simple method for "do as I do" motion transfer: ... 8450 retweets

arxiv.org
This paper presents a simple method for "do as I do" motion transfer: given a source video of a person dancing we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We p...

AI and Deep Learning in 2017 1159 retweets

www.wildml.com
The year is coming to an end. I did not write nearly as much as I had planned to. But I’m hoping to change that next year, with more tutorials around Reinforcement Learning, Evolution, and Ba…

ML beyond Curve Fitting: An Intro to Causal Inference and do-Calculus 966 retweets

www.inference.vc
You might have come across Judea Pearl's new book, and a related interview which was widely shared in my social bubble. In the interview, Pearl dismisses most of what we do in ML as curve fitting. While I believe that's an overstatement (conveniently...

10 Exciting Ideas of 2018 in NLP 605 retweets

ruder.io
This post gathers 10 ideas that I found exciting and impactful this year—and that we'll likely see more of in the future. For each idea, it highlights 1-2 papers that execute them well.

Oxford Deep NLP 2017 course 521 retweets

github.com
Oxford Deep NLP 2017 course. Contribute to oxford-cs-deepnlp-2017/lectures development by creating an account on GitHub.

Frontiers in Natural Language Processing Expert Responses 510 retweets

docs.google.com
Frontiers in Natural Language Processing Expert Responses, organized by Herman Kamper, Sebastian Ruder, and Stephan Gouws at the Deep Learning Indaba 2018. You can find the slides of the session here.

Bias detectives: the researchers striving to make algorithms fair 506 retweets

www.nature.com
Nature just published a major feature on researchers working on bias in machine learning. Features many of us, including geomblog mikarv s010n achould b_mittelstadt rayidghani dgrobinson bowlinearl - and the AINowInstitute work on due proces...

Deep Learning for Siri’s Voice: On-device Deep Mixture Density Network... 489 retweets

machinelearning.apple.com
Apple Machine Learning Journal publishes posts written by Apple engineers about their work using machine learning technologies to help build innovative products for millions of people around the world.

Troubling Trends in Machine Learning Scholarship (pdf) 454 retweets

www.dropbox.com
Well here goes. Our ICML Debates paper is live: "Troubling Trends in Machine Learning Scholarship". If anyone needs me, I'll be in witness protection. 🙄

We’re gearing up for the 2019 edition of Stanford CS224N: Natural Lang... 442 retweets

web.stanford.edu
We’re gearing up for the 2019 edition of Stanford CS224N: Natural Language Processing with Deep Learning. Starts Jan 8—over 500 students enrolled—using PyTorch—new Neural MT assignments—new lectures on transformers, subword models, and human language...

Feels weird to be playing inside of a recurrent neural network’s hallu... 385 retweets

worldmodels.github.io
Feels weird to be playing inside of a recurrent neural network’s hallucination of a Doom level. Demo→

Launching Cutting Edge Deep Learning for Coders: 2018 edition · fastai 369 retweets

www.fast.ai
Launching Cutting Edge Deep Learning for Coders: 2018 edition I know a lot of folks have been waiting for this - hope it meets your expectations!

Speech and Language Processing (3rd ed. draft), Dan Jurafsky and James H. Martin 357 retweets

web.stanford.edu
Here's the Autumn 2018 release of draft chapters for Speech and Language Processing! Enjoy!

Complete draft of a new textbook for NLP: Thanks to everyone who gave... 340 retweets

github.com
Complete draft of a new textbook for NLP: Thanks to everyone who gave me edits and corrections! Stop me at NAACL2018 and I'll buy you a beer or a beignet.

OpenNMT - Open-Source Neural Machine Translation 336 retweets

opennmt.net
Excited to introduce OpenNMT, an open-source neural machine translation system developed for industrial and academic use.

Speech and Language Processing (3rd ed. draft), Dan Jurafsky and James H. Martin 315 retweets

web.stanford.edu
Speech and Language Processing 3rd ed. partial draft of 21 chapters is up. Thanks to all you readers for advice/typos!

Ask HN: Does ML research ever get translated to industry? 308 retweets

news.ycombinator.com
My answer to why many ML research advances do not get translated into industry:

To Build Truly Intelligent Machines, Teach Them Cause and Effect 303 retweets

www.quantamagazine.org
“AI is currently split. There are those who are intoxicated by the success of deep learning and neural nets. They want to continue to fit curves. But when you talk to people who have done work in AI outside statistical learning, they get it immediate...

Making floating point math highly efficient for AI hardware 302 retweets

code.fb.com
Rethinking floating point for deep learning - Jeff Johnson at FAIR - proposes non-linear floating point math -- more energy efficient, accurate - no retraining or quantization before deployment - Verilog, C++, PyTorch implementations available

An Adversarial Review of “Adversarial Generation of Natural Language” 295 retweets

medium.com
[edit: some people commented that they don’t like the us-vs-them tone and that “deep learning people” can — and some indeed do — do good NLP work. To be clear: I fully agree. NotAllDeepLearners ]…

Extracting Automata from Recurrent Neural Networks Using Queries and ... 272 retweets

arxiv.org
We present a novel algorithm that uses exact learning and abstraction to extract a deterministic finite automaton describing the state dynamics of a given trained RNN. We do this using Angluin's L* algorithm as a learner and the trained RNN as an ora...
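A minimal, self-contained toy of the teacher/learner split the abstract describes: L* issues membership queries and the trained RNN answers them. The ToyRNN below is a hand-written stand-in (tracking the parity of 'a's), not a trained network, and the paper's abstraction-based equivalence queries are omitted.

```python
# Toy sketch: L*-style membership queries answered by an "RNN" oracle.
# ToyRNN is illustrative only; the paper uses a trained RNN and adds
# abstraction-based equivalence queries on top of this interface.
class ToyRNN:
    def initial(self):
        return 0

    def step(self, state, symbol):
        return state ^ 1 if symbol == 'a' else state  # flip parity on 'a'

    def accepts(self, state):
        return state == 0  # accept strings with an even number of 'a's

def membership_query(rnn, word):
    state = rnn.initial()
    for symbol in word:
        state = rnn.step(state, symbol)
    return rnn.accepts(state)

print(membership_query(ToyRNN(), "abab"))  # True: two 'a's
```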

Smerity.com: Understanding the Mixture of Softmaxes (MoS) 259 retweets

smerity.com
The mixture of softmaxes, which currently achieves SotA on word level language modeling (PTB + WikiText-2), fixes a fairly fundamental flaw in softmax, with likely broader implications.
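To make the flaw and the fix concrete, here is a hedged PyTorch sketch of an MoS output layer (component count and layer names are illustrative): rather than one softmax from a single hidden state, K softmaxes over a shared vocabulary decoder are mixed with learned prior weights, which lifts the rank restriction a single softmax imposes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfSoftmaxes(nn.Module):
    """Illustrative MoS head: K softmaxes over a shared decoder."""
    def __init__(self, d_hidden, vocab_size, n_components=5):
        super().__init__()
        self.K = n_components
        self.prior = nn.Linear(d_hidden, n_components)            # mixture weights
        self.proj = nn.Linear(d_hidden, n_components * d_hidden)  # K context vectors
        self.decoder = nn.Linear(d_hidden, vocab_size)            # shared output layer

    def forward(self, h):                                 # h: (batch, d_hidden)
        pi = F.softmax(self.prior(h), dim=-1)             # (batch, K)
        ctx = torch.tanh(self.proj(h)).view(-1, self.K, h.size(-1))
        probs = F.softmax(self.decoder(ctx), dim=-1)      # (batch, K, vocab)
        return (pi.unsqueeze(-1) * probs).sum(dim=1)      # (batch, vocab)

mos = MixtureOfSoftmaxes(d_hidden=32, vocab_size=100)
print(mos(torch.randn(4, 32)).sum(dim=-1))                # each row sums to 1
```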

A Review of the Neural History of Natural Language Processing 255 retweets

blog.aylien.com
New blog post: A Review of the Recent History of Natural Language Processing. The 8 biggest milestones in the last ~15 years of NLProc. From our NLP session at DeepIndaba. _aylien

Machine Learning Glossary | Google Developers 253 retweets

developers.google.com
Compilation of key machine-learning and TensorFlow terms, with beginner-friendly definitions.

Word embeddings in 2017: Trends and future directions 252 retweets

ruder.io
Word embeddings are an integral part of current NLP models, but approaches that supersede the original word2vec have not been proposed. This post focuses on the deficiencies of word embeddings and how recent approaches have tried to resolve them.

A new course to teach people about fairness in machine learning 251 retweets

www.blog.google
Delighted to announce Google's launch of Introduction to Fairness in Machine Learning! A collaboration between many of us, including my team. =) Also features a short video (with me =P) speaking to human biases in the ML cycle, and many other resourc...

GANs are Broken in More than One Way: The Numerics of GANs 235 retweets

www.inference.vc
Last year, when I was on a mission to "fix GANs" I had a tendency to focus only on what the loss function is, and completely disregard the issue of how do we actually find a minimum. Here is the paper that has finally challenged that attitu...

Deep Learning, Language and Cognition 232 retweets

video.ias.edu
Deep Learning, Language and Cognition: Video of an introductory talk on computational linguistics for a broad audience—from hand-written rules to modern neural net models—by Christopher Manning (chrmanning) at IAS, Princeton. NLProc

OpenAI Trains Language Model, Mass Hysteria Ensues 215 retweets

approximatelycorrect.com
***OpenAI Trains Language Model, Mass Hysteria Ensues*** New Post on Approximately Correct digesting the code release debate and media shitstorm.

The Big-&-Extending-Repository-of-Transformers: Pretrained PyTorch models... 214 retweets

github.com
Here is an op-for-op PyTorch re-implementation of GoogleAI's BERT model by sanhestpasmoi, timrault, and me. We made a script to load Google's pre-trained models, and it performs about the same as the TF implementation in our tests (see the readme). ...
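For context, loading the converted weights looked roughly like this with the package; the names below follow our reading of the repository's README at the time and should be treated as assumptions, not a definitive API reference.

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Sketch, assuming the pytorch-pretrained-bert package and the released
# 'bert-base-uncased' weights; see the repository README for exact usage.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

tokens = tokenizer.tokenize("[CLS] the quick brown fox [SEP]")
ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    encoded_layers, pooled = model(ids)  # one tensor per Transformer layer
print(encoded_layers[-1].shape)          # (1, seq_len, 768) for the base model
```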

Neural Reading Comprehension and Beyond 213 retweets

purl.stanford.edu
Teaching machines to understand human language documents is one of the most elusive and long-standing challenges in Artificial Intelligence. This thesis tackles the problem of reading comprehension...

Pytorch implementations of various Deep NLP models in cs-224n (Stanfor... 210 retweets

github.com
Pytorch implementations of many of the deep learning NLP models discussed in cs224n by Kim SungDong. dlearn NLProc

The multilingual BERT model is out now (earlier than anticipated). It ... 210 retweets

github.com
The multilingual BERT model is out now (earlier than anticipated). It covers 102 languages and features an extensive README motivating certain preprocessing and modelling choices.

Contextual Word Representations: A Contextual Introduction 204 retweets

arxiv.org
This introduction aims to tell the story of how we put words into computers. It is part of the story of the field of natural language processing (NLP), a branch of artificial intelligence. It targets a wide audience with a basic understanding of comp...

How to Train Your ResNet | myrtleai 203 retweets

www.myrtle.ai
This is stunning work from dcpage3, who has smashed our DAWNBench CIFAR10 training record, and written a fascinating and detailed series explaining all the improvements he made. All researchers interested in model speed or accuracy need to read thi...

CS 11-747: Neural Networks for NLP 198 retweets

phontron.com
2019 edition of CMU "Neural Networks for NLP" is starting tomorrow! We'll post slides/lecture videos, feel free to follow along 2019 brings new classes on contextualized word representations (ELMo/BERT) and model interpretation; PyTorch/DyNet code e...

Deep Learning, Structure and Innate Priors 198 retweets

www.abigailsee.com
Is structure a necessary good or a necessary evil? The video of ylecun and chrmanning's discussion on Deep Learning, Structure and Innate Priors is online! Watch the entire conversation, and read a summary of it, here: Stanford DeepLearning ...

Introduction to Learning to Trade with Reinforcement Learning 186 retweets

www.wildml.com
Thanks a lot to aerinykim, suzatweet and hardmaru for the useful feedback! The academic Deep Learning research community has largely stayed away from the financial markets. Maybe that’s be…

Do CIFAR-10 Classifiers Generalize to CIFAR-10? 183 retweets

arxiv.org
Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to sele...

Universal Language Model Fine-tuning for Text Classification 178 retweets

arxiv.org
Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer lear...
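One concrete ingredient of the method, the slanted triangular learning rate schedule, is small enough to sketch; the formula below follows the paper, while the hyperparameter defaults shown are assumptions for illustration.

```python
# Hedged sketch of ULMFiT's slanted triangular learning rate (STLR):
# a short linear warm-up to lr_max, then a long linear decay.
def stlr(t, T, cut_frac=0.1, ratio=32, lr_max=0.01):
    cut = int(T * cut_frac)               # iteration where the rate peaks
    if t < cut:
        p = t / cut                       # warm-up fraction
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))  # decay fraction
    return lr_max * (1 + p * (ratio - 1)) / ratio

# Peaks at lr_max a tenth of the way through 1000 iterations:
print(max(stlr(t, 1000) for t in range(1000)))  # ~0.01
```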

Smerity.com: Peeking into the neural network architecture used for Goog... 171 retweets

smerity.com
What started as a summary of Google's Neural MT paper ended as a "from ground up" description of the architecture :)

Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Seque... 163 retweets

seq2seq-vis.io
Seq2Seq-Vis: A visual nn debugger targeting S2S models. Hooks interactively into toolkits like OpenNMT. (with IBMResearch, harvardvcg)

Open Questions 162 retweets

decanlp.com
Website is up! Slides motivating true multitask learning in AI and NLP from a recent talk:

Four deep learning trends from ACL 2017 161 retweets

www.abigailsee.com
Structure's back, subword's in, and we're rethinking our assumptions. The dlearn trends of acl2017nlp, a blog post

Code accompanying our EMNLP paper Learning Language Representations f... 157 retweets

github.com
GitHub - chaitanyamalaviya/lang-reps: Code accompanying our EMNLP paper Learning Language Representations for Typology Prediction

Toxic Comment Classification Challenge | Kaggle 150 retweets

www.kaggle.com
NLP researchers, if you've got some play-time this holiday season, be sure to try out the new Kaggle NLP classification competition. I think we're going to see results smashing past SoTA here - could be like the ILSVRC 2012 of NLP!

A Course in Machine Learning 149 retweets

ciml.info
Super happy to announce a new major edition: new chapters on bias, structured prediction & imitation; other edits, fixes & figures!

An Overview of Multi-Task Learning in Deep Neural Networks 144 retweets

ruder.io
Multi-task learning is becoming more and more popular. This post gives a general overview of the current state of multi-task learning. In particular, it provides context for current neural network-based methods by discussing the extensive multi-task ...

No, Facebook Did Not Panic and Shut Down an AI Program That Was Gettin... 138 retweets

gizmodo.com
In recent weeks, a story about experimental Facebook machine learning research has been circulating with increasingly panicky, Skynet-esque headlines.

Evaluating Text Output in NLP: BLEU at your own risk 132 retweets

medium.com
One question I get fairly often from folks who are just getting into NLP is how to evaluate systems when the output of that system is text, rather than some sort of classification of the input text…

The Generalization Mystery: Sharp vs Flat Minima 132 retweets

www.inference.vc
I set out to write about the following paper I saw people talk about on twitter and reddit: Hao Li, Zheng Xu, Gavin Taylor, Tom Goldstein Visualizing the Loss Landscape of Neural Nets It's related to this pretty insightful paper: Laurent Dinh, Razvan...

A Syntactic Neural Model for General-Purpose Code Generation 125 retweets

arxiv.org
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the und...

Limitations of Deep Learning for Vision, and How We Might Fix Them 124 retweets

thegradient.pub
History of Deep Learning We are witnessing the third rise of deep learning. The first two waves — 1950s–1960s and 1980s–1990s — generated considerable excitement but slowly ran out of steam, since these neural networks neither achieved their promised...

Earning My Turns: A (computational) linguistic farce in three acts 123 retweets

www.earningmyturns.org
Prologue I had not blogged for 3 years. Many plausible excuses, but the big reason is that it is easier to dash a tweet or a short incid...

2019_03_20_GTC -- Deep Learning and Robotics -- Abbeel.pdf 123 retweets

www.dropbox.com
My slides from this morning's keynote at GTC19. Covered: - Few-Shot Reinforcement Learning - Leveraging Simulation - Model-based RL - Learning Representations for Exploration - Few-Shot Imitation Learning

TensorFlow code and pre-trained models for BERT 122 retweets

github.com
TensorFlow code and pre-trained models for BERT. Contribute to google-research/bert development by creating an account on GitHub.

Exploring the Limits of Language Modeling 122 retweets

arxiv.org
In this work we explore recent advances in Recurrent Neural Networks for large scale Language Modeling, a task central to language understanding. We extend current models to deal with two key challenges present in this task: corpora and vocabulary si...

CoNaLa: The Code/Natural Language Challenge 121 retweets

conala-corpus.github.io
Just released "CoNaLa", a dataset/contest for broad-coverage generation of programs from English commands: 2,879 manually annotated examples, and 600k mined from StackOverflow to increase coverage; super-excited to bring NL->Code to the open doma...

Best Paper Awards - NAACL-HLT 2019 119 retweets

naacl2019.org
We're delighted to announce the NAACL 2019 best papers. Check them out!

On the Origin of Deep Learning 118 retweets

arxiv.org
This paper is a review of the evolutionary history of deep learning models. It covers from the genesis of neural networks when associationism modeling of the brain is studied, to the models that dominate the last decade of research in deep learning l...

Smerity.com: Backing off towards simplicity 117 retweets

smerity.com
In deep learning, when we lose accurate baselines, we lose our ability to accurately measure our progress over time.

Do Better ImageNet Models Transfer Better? 114 retweets

arxiv.org
Transfer learning is a cornerstone of computer vision, yet little work has been done to evaluate the relationship between architecture and transfer. An implicit hypothesis in modern computer vision research is that models that perform better on Image...

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Conte... 111 retweets

arxiv.org
Transformer networks have a potential of learning longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. As a solution, we propose a novel neural architecture, Transformer-XL, that enables Transformer t...
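The recurrence itself is simple to caricature. The sketch below caches the previous segment's hidden states (gradients stopped) and prepends them as extra keys/values; it uses a stock attention layer and omits the paper's relative positional encodings, so treat it as an illustration rather than the model.

```python
import torch
import torch.nn as nn

d_model, n_head = 64, 4
attn = nn.MultiheadAttention(d_model, n_head, batch_first=True)

def attend_with_memory(h_curr, memory):
    # Previous segment's states join the current keys/values, gradient-free.
    context = torch.cat([memory.detach(), h_curr], dim=1)  # (B, mem+seg, d)
    out, _ = attn(h_curr, context, context)                # queries: current segment
    return out, h_curr                                     # h_curr is next memory

memory = torch.zeros(2, 16, d_model)       # cached previous segment
segment = torch.randn(2, 16, d_model)      # current segment
out, memory = attend_with_memory(segment, memory)
```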

Pointer Sentinel Mixture Models 110 retweets

arxiv.org
Recent neural network sequence models with softmax classifiers have achieved their best language modeling performance only with very large hidden states and large vocabularies. Even then they struggle to predict rare or unseen words even if the conte...
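The proposed mixture is easy to sketch; shapes and names below are illustrative, not the paper's code. The gate is the softmax mass assigned to a learned sentinel: that mass scales the RNN's vocabulary softmax, and the remaining pointer mass is scattered onto the words in the recent context.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of a pointer-sentinel mixture over the output distribution.
def pointer_sentinel(vocab_logits, ptr_scores, sentinel, context_ids):
    # Normalizing pointer scores together with the sentinel yields gate g.
    joint = F.softmax(torch.cat([ptr_scores, sentinel], dim=-1), dim=-1)
    p_ptr, g = joint[:, :-1], joint[:, -1:]         # g = mass on the sentinel
    out = g * F.softmax(vocab_logits, dim=-1)       # gated vocabulary softmax
    return out.scatter_add(-1, context_ids, p_ptr)  # add pointer mass per word

B, V, L = 2, 50, 7  # batch, vocab size, context length (illustrative)
p = pointer_sentinel(torch.randn(B, V), torch.randn(B, L),
                     torch.randn(B, 1), torch.randint(0, V, (B, L)))
print(p.sum(dim=-1))  # each row sums to 1
```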

Better Language Models and Their Implications 106 retweets

openai.com
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, ques...

Slow Learning – Sara Hooker – Medium 106 retweets

medium.com
I have put together some thoughts about the article here: I am concerned that the story in the Economist not only displaces my own narrative but also sets unrealistic expectations for anyone setting out in the field.

Multi-Task Learning Objectives for Natural Language Processing 105 retweets

ruder.io
Multi-task learning is becoming increasingly popular in NLP but it is still not understood very well which tasks are useful. As inspiration, this post gives an overview of the most common auxiliary tasks used for multi-task learning for NLP.

The Stanford Natural Language Processing Group 104 retweets

nlp.stanford.edu
We have a postdoc position available in NLP for Social Science!

1012 Week 7.pdf - Google Drive 102 retweets

drive.google.com
Just gave one of the more fun lectures from my NLP class: How to find and answer a research question.

Videos in "NAACL 2018" on Vimeo 102 retweets

vimeo.com
Weren't at NAACL but want to view the talks? Were at NAACL and just want to rewatch your favorites? Either way, we have hours and hours of entertainment and education for you!