Teaching Machines to Code


Applying recent advances in Artificial Intelligence to synthesize programs from descriptions.

As part of our work on Teaching Machines to Code, we are building Deep Learning models that read and write code in a tree format. After trying to manage this complexity in TensorFlow, I decided to give PyTorch a try.

PyTorch is a framework built by Facebook AI researchers that has been growing in popularity in the Natural Language and Reinforcement Learning research communities. Its main benefit is dynamic graph building: unlike TensorFlow, where the graph is built once and then "executed" many times, PyTorch lets you rebuild the graph dynamically using plain Python logic, as if you were computing with numpy arrays.

PyTorch visualization
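For a sense of what this looks like in practice, here is a minimal sketch (the module, shapes, and step counts are made up purely for illustration) where ordinary Python control flow decides the graph on every forward pass:

    import torch
    import torch.nn.functional as F
    from torch import nn

    class DynamicNet(nn.Module):
        """Toy module whose computation graph is rebuilt on every forward pass."""
        def __init__(self, dim=16):
            super(DynamicNet, self).__init__()
            self.step = nn.Linear(dim, dim)

        def forward(self, x, num_steps):
            # Ordinary Python control flow determines the graph at run time,
            # so different calls can apply a different number of layers.
            for _ in range(num_steps):
                x = F.relu(self.step(x))
            return x

    net = DynamicNet()
    out_a = net(torch.randn(4, 16), num_steps=2)  # one graph
    out_b = net(torch.randn(4, 16), num_steps=7)  # a different graph, same code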

This flexibility has attracted people who work with complex input/output data (e.g. language, trees, graphs) or who need to run custom logic in the middle of the computation (e.g. Deep RL).

Here I want to talk about batching. Even though PyTorch is fast thanks to GPU acceleration and pushing most computation down to C modules, if you are not batching your computation you are still going to pay the toll.

Recursive neural networks (TreeLSTM, for example) are especially hard to batch, as each example is a different tree.

We have built a library, TorchFold, that makes it simple to batch anything, however complicated and dynamic your architecture may be.
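As a rough sketch of the pattern, encoding a batch of binary trees looks roughly like the following: operations are first recorded per tree, then executed in batched form. The Fold.add / Fold.apply shape is only an approximation of the TorchFold interface, and the tree fields, model methods, and dimensions are illustrative, so check the repository for the exact API.

    import torch
    import torchfold
    from torch import nn

    class TreeEncoder(nn.Module):
        """Batched operations that Fold.apply will call with stacked inputs."""
        def __init__(self, vocab=100, dim=32):
            super(TreeEncoder, self).__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.combine = nn.Linear(2 * dim, dim)

        def leaf(self, tokens):
            return self.embed(tokens)

        def merge(self, left, right):
            return torch.tanh(self.combine(torch.cat([left, right], 1)))

    def encode_tree(fold, node):
        # Describe the computation for one tree; nothing is executed yet.
        if node.is_leaf:
            return fold.add('leaf', node.token)
        return fold.add('merge',
                        encode_tree(fold, node.left),
                        encode_tree(fold, node.right))

    fold = torchfold.Fold()
    roots = [encode_tree(fold, tree) for tree in trees]   # `trees`: your dataset of trees
    # Identical operations across all trees are grouped into a few batched calls.
    encodings = fold.apply(TreeEncoder(), [roots])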


Recently we have been exploring applications of the code understanding technology we are building. Code search naturally comes to mind.

Being developers ourselves, we use code search every day for various types of questions. They range from asking for the definition of a specific function or API, to looking up the implementation of some functionality, to describing what we want to do. Currently one would use Google, StackOverflow, and GitHub for all of these, going from one website to another and using GitHub search for in-repo search.

As an alternative, we can index all the functions, classes, and APIs in a single search index. We can embed the implementations of functions using a neural network pre-trained to align text and code (from our generator network). At retrieval time we look up tokens in a regular index and encode the query text with the other part of the network. Finding nearest neighbors in the encoded code space can then augment regular search techniques with additional "semantic" meaning.
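To illustrate the retrieval side, here is a small sketch with numpy; text_encoder and code_encoder stand in for the two halves of such a pre-trained alignment network and are purely hypothetical names:

    import numpy as np

    def build_index(snippets, code_encoder):
        """Embed every function body once, offline; returns an (N, d) matrix."""
        return np.stack([code_encoder(s) for s in snippets])

    def semantic_search(query, snippets, index, text_encoder, top_k=5):
        """Return the snippets whose embeddings are closest to the query embedding."""
        q = text_encoder(query)
        # Cosine similarity between the query and every indexed function.
        sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-8)
        best = np.argsort(-sims)[:top_k]
        return [(snippets[i], float(sims[i])) for i in best]

In practice these similarity scores would be blended with the regular token-based index rather than replacing it.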

We have a quick demo of such functionality available here:

Code Search Demo

Let us know if you have any comments at contact@near.ai.


Hi everyone,

Today we are going to show a few demos of Sequence to Sequence models for code completion.

In the first demo we trained a seq2seq model, more on which below, on all the Accepted Java solutions on Codeforces. The goal of the model is to predict the next token based on all the tokens seen so far. We then plug the output of the model into a code editor to see how it behaves when one actually solves a competitive programming problem. Here we are solving problem A from Codeforces Round 407:

Code Completion Demo

Notice how early on the model nearly perfectly predicts all the tokens, which is not surprising, since most Java solutions begin with rather standard imports and the solution class definition. Later it perfectly predicts the entire line that reads n, but does not do as well on the line that reads k, which is also not surprising, since k is a rather rare name for the second variable to be read.

There are several interesting moments in the video. First, note how after int n = it predicts sc, understanding that n will probably be read from the scanner (and, while not shown in the video, if the scanner name were in, the model would have properly predicted in after int n =). However, when the line starts with int ans =, it properly predicts 0, since ans is rarely read from the input.

The second interesting moment is what happens when we print the answer. At first, when the line contains System.out.println(ans, it predicts a semicolon (mistakenly) and the closing parenthesis as possible next tokens, but not - 1. However, once we introduce the second parenthesis, System.out.println((ans, it properly predicts - 1, the closing parenthesis, and the division by two.

There is also a noticeable pause before the for loop is written. This is due to the fact that using such artificial intelligence suggestions completely turns off the natural intelligence the operator of the machine possesses :)

One concern with such autocomplete is that in the majority of cases most tokens are faster to type than to select from a list. To address this, in the second demo we introduce beam search, which searches for the most likely sequences of tokens. Here's what it looks like:

Code Completion Demo

There are more rough edges here, but notice how early on the model correctly predicts entire lines of code.
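For reference, the core of beam search is quite small. Below is a generic sketch; next_token_logprobs is a hypothetical stand-in for the trained model that returns candidate next tokens with their log-probabilities, and the fixed expansion length is only for illustration:

    import heapq

    def beam_search(next_token_logprobs, prefix, beam_size=5, max_len=10):
        """Grow the `beam_size` most likely token sequences that continue `prefix`."""
        beams = [(0.0, list(prefix))]  # (total log-probability, tokens)
        for _ in range(max_len):
            candidates = []
            for score, tokens in beams:
                for token, logp in next_token_logprobs(tokens):
                    candidates.append((score + logp, tokens + [token]))
            # Keep only the highest-scoring sequences for the next step.
            beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
        return beams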

Currently we do not condition the predictions on the task, partially because the number of tasks available on the Internet is too small for a machine learning model to learn anything reasonable (so please help us fix that by participating here: https://r-nn.com). Once we have a working model that is conditioned on the problem statement, we expect it to be able to predict variable names, snippets that read and write data, and some basic logic.

Let’s review Seq2Seq models in the following section.


This post is the first in a series discussing some interesting advances in using machine learning to both write and execute code. This particular post is about a machine learning model proposed early last year by Scott Reed from the University of Michigan, then an intern at Google DeepMind, called Neural Programmer-Interpreters.

Neural Programmer-Interpreters

Neural programmer-interpreter, or NPI for short, is a machine learning model that learns to execute programs given their execution traces. This is different from some other models, such as Neural Turing Machines, that learn to execute programs only given example input-output pairs.

Internally NPI is a recurrent neural network. If you are unfamiliar with recurrent neural networks, I highly recommend reading this article by Andrej Karpathy, which shows some very impressive results from very simple recurrent neural networks: The Unreasonable Effectiveness of Recurrent Neural Networks.

The setting in which NPI operates consists of an environment in which a certain task needs to be executed, and some low level operations that can be performed in such environment. For example, a setting might comprise:

  • An environment: a grid in which each cell can contain a digit, with a cursor in each row of the grid pointing to one of the cells;

  • A task of adding up two multidigit numbers;

  • Low-level operations including “move cursor in one of the rows” and “write a digit”:

NPI
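At its core, NPI maintains a recurrent state and, at each step, outputs whether to return from the current program, which subprogram to call next, and with what arguments. Below is a simplified skeleton of a single step; the layer names and sizes are chosen purely for illustration and are not the paper's exact architecture:

    import torch
    from torch import nn

    class NPICore(nn.Module):
        """Simplified single-step skeleton of a Neural Programmer-Interpreter."""
        def __init__(self, env_dim, arg_dim, prog_dim, hidden_dim, num_programs):
            super(NPICore, self).__init__()
            self.encode = nn.Linear(env_dim + arg_dim + prog_dim, hidden_dim)
            self.lstm = nn.LSTMCell(hidden_dim, hidden_dim)
            self.end = nn.Linear(hidden_dim, 1)                 # should we return?
            self.program = nn.Linear(hidden_dim, num_programs)  # which subprogram to call
            self.arguments = nn.Linear(hidden_dim, arg_dim)     # arguments for that call

        def forward(self, env_obs, args, prog_embedding, state):
            # Fuse the environment observation, current arguments, and program embedding.
            s = torch.tanh(self.encode(torch.cat([env_obs, args, prog_embedding], dim=-1)))
            h, c = self.lstm(s, state)
            return torch.sigmoid(self.end(h)), self.program(h), self.arguments(h), (h, c)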
