Machine learning can be a fantastic tool for creators, but integrating AI into your workflow is a challenge for those who can't code. A new program called Runway ML aims to make this process easier by providing artists, designers, filmmakers, and others with an “app store” of machine learning applications that can be activated with a few clicks.

Say you're an animator on a budget who wants to turn a video of a human actor into a 3D model. Instead of hiring expensive motion capture equipment, you could use Runway to apply a neural network called “PoseNet” to your footage, creating wireframe models of your actor that can then be exported for animation.
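
To make that concrete, here is a minimal sketch of the kind of pipeline Runway wraps in a click: run a pretrained pose model over a clip and collect per-frame joint positions. This sketch uses MediaPipe Pose as a stand-in for PoseNet, and the input file name is illustrative, not from the article.

    # Sketch: extract per-frame joint positions from a video clip.
    # MediaPipe Pose stands in for PoseNet; "actor.mp4" is illustrative.
    import cv2
    import mediapipe as mp

    pose = mp.solutions.pose.Pose(static_image_mode=False)

    cap = cv2.VideoCapture("actor.mp4")
    wireframes = []  # one list of (x, y, z) joints per frame
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV decodes frames as BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            wireframes.append([(lm.x, lm.y, lm.z)
                               for lm in results.pose_landmarks.landmark])
    cap.release()
    # wireframes now holds normalized joint coordinates per frame,
    # ready to export to an animation tool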

Or say you need to remove a coffee cup that was accidentally left in a shot on your high-budget fantasy TV drama. You could edit it out the traditional way, painting over the cup by hand, or you could run your footage through a machine learning segmentation model, which would automatically highlight different objects in each frame to make your job easier.
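
Under the hood, that step looks something like the sketch below, which runs torchvision's pretrained DeepLabV3 over a single frame to get per-pixel object labels; the frame path is illustrative.

    # Sketch: per-pixel segmentation of one frame with a pretrained
    # torchvision model; "shot_0042.png" is an illustrative path.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.segmentation.deeplabv3_resnet50(pretrained=True).eval()
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    frame = Image.open("shot_0042.png").convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))["out"][0]
    labels = logits.argmax(0)  # class ID per pixel: one mask per object class
    # Selecting the class covering the stray prop yields a mask you can
    # hand to a compositing tool instead of painting frame by frame.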

Examples like these are just the tip of the iceberg for Runway, which co-founder Cristóbal Valenzuela describes as a radically egalitarian tool. “Machine learning is a very exclusive technology,” Valenzuela tells The Verge. “But I want to make things more inclusive; to get people from different backgrounds sitting around the table and using these models.”

Runway began as Valenzuela's thesis project at the Tisch School of the Arts at New York University. After getting enthusiastic feedback from the AI art community, he decided to take the program mainstream, asking two school friends to come on board as co-founders and gathering seed money from NYC and Silicon Valley backers. The company was incorporated last December, with a beta launch following this January.

Valenzuela straddles the fields of art and code and says he wants to bridge these two worlds, empowering non-coders to use machine learning models and, in turn, connecting researchers to the people who will benefit directly from their work.

In a blog post he wrote last May, Valenzuela compares the current AI art scene to painting in the 16th and 17th centuries. At that time, simply storing and using paint was something of a craft secret, with painters relying on esoteric techniques involving pigs' bladders and string. But with the invention of the paint tube in 1841, the craft became more accessible. Painting also became easier to do outdoors, leading to new styles and movements.

As the 19th-century painter Pierre-Auguste Renoir told his son: “Without colors in tubes, there would have been no Cézanne, no Monet, no Sisley or Pissarro, nothing of what the journalists were to call Impressionism.” In other words: accessibility begets creativity.

But what is the “paint tube” for modern artists? Valenzuela makes a convincing argument that it might be Runway — or, at least, a program that looks a lot like it.

The pigs' bladders holding back progress in this case are the skills currently needed to use machine learning models. That means learning to use software like TensorFlow or PyTorch; it might mean buying a few pricey GPUs (because your current computer won't run these systems) or connecting to an AWS instance instead. None of these tasks is beyond the reach of non-coders, but they certainly take time and create a bottleneck for users. By comparison, Runway's model is the perfect paint tube: just click and go.

The company is not the first to make AI models easier to use, of course. But earlier examples — like Lobe, which let users train AI systems through a visual interface before it was bought by Microsoft — have focused on business use cases rather than creative ones.

Runway's target market is obvious when you load up its storefront, which lets you browse a range of models that run the gamut from text generation to motion tracking.

You click to see the details of each model, click to add it to your workspace, set up your inputs and outputs, then start the system running. There are hooks to connect these outputs to other apps (so you can send ML-processed images to Photoshop, for example), and users can import new models directly from GitHub with just a few lines of code.
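
That last claim is easy to picture. Below is a hypothetical sketch of what such a wrapper might look like; the runway module, decorator names, and data types are illustrative assumptions rather than a documented API.

    # Hypothetical sketch of the "few lines of code" needed to wrap a
    # model pulled from GitHub for a tool like Runway. The runway module,
    # decorators, and data types here are assumptions, not a documented API.
    import runway
    from runway.data_types import image
    from my_model import load_model  # hypothetical model from GitHub

    @runway.setup
    def setup(opts):
        # Load weights once, when the workspace starts the model
        return load_model()

    @runway.command("process", inputs={"frame": image}, outputs={"output": image})
    def process(model, inputs):
        # Runway feeds in whatever the user wired to the input port
        return {"output": model.run(inputs["frame"])}

    if __name__ == "__main__":
        runway.run()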

This latter point is one of the most important for Runway. It's hard to overstate just how fast-paced and collaborative the current AI art scene is, and how much individuals benefit from one another's work and from professional research. No sooner does a new model get released than it's pounced on by users who put it to all sorts of unexpected uses.

Take the AI text generation system GPT-2, unveiled in February by the research lab OpenAI. In the months since its launch, GPT-2 has been turned into accessible web apps, it's been used to help write a novel, and someone has even used it to create a subreddit populated entirely by bots imitating other subreddits.

In short: this is a bubbling and energetic scene, and Runway wants to stay as connected to it as possible. Valenzuela says his team is constantly responding to users' feedback, adding new models to the program and updating the software's interface every month.

It's this connection to the community, and this speed of updating, that he says will stop Runway from getting overrun by the industry's established players. “We're just four people on our team,” he says. “We like to ship things fast, to get people excited and get them using things.” And although corporate giants like Adobe certainly have a lot of interest in similar AI applications, they're necessarily going to be slower to integrate them into products.

The @runwayml beta is one of the most magical tools i've played with in a while– being able to spin up and play with models without writing a line of code is going to change so much pic.twitter.com/IGAoIPbDRL

— Will Manidis (@WillManidis) June 5, 2019