I love writing code to make things: apps, websites, charts and even music. It’s a skill I’ve been working hard on for over 20 years.
So I must confess that last week’s news about the release of a new “AI assistant” coding tool called GitHub Copilot gave me complicated feelings.
Copilot, which spits out code to order based on “clear English” descriptions, is a remarkable tool. But is it about to put programmers like me out of work?
Trained on billions of lines of human code
GitHub (now owned by Microsoft) is a collaboration platform and social network for developers. Think of it as a cross between Dropbox and Instagram, used by everyone from individual hobbyists to highly paid software engineers at large tech companies.
Over the past decade, GitHub users have uploaded tens of billions of lines of code for more than 200 million apps. That is a lot of print("hello world") statements.
The Copilot AI works like many other machine learning tools: it was “trained” by scanning and looking for patterns in those tens of billions of lines of code written and uploaded by members of the GitHub coding community.
The training can take many months, hundreds of millions of dollars worth of computer equipment, and enough electricity to run a house for ten years. Once it’s done, human coders can then write a description (in plain English) of what they want their code to do, and the Copilot AI helper will write the code for them.
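To illustrate that workflow, here is a hedged sketch (the prompt and the suggested body below are hypothetical examples of the kind of exchange involved, not actual Copilot output). The programmer writes a plain-English comment and a function signature, and the assistant proposes the body:

```python
# The human writes only the comment and the signature...
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    # ...and the AI assistant suggests a body like this one:
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(100))  # prints 212.0
```

The programmer then accepts, edits or rejects the suggestion; the description in plain English is doing the work a search query or a snippet library used to do.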
Based on the Codex “language model”, Copilot is the next step in a long line of “intelligent auto-completion” tools. Past tools, however, were much more limited; Copilot is a significant improvement.
A surprisingly effective assistant
About a year ago I got early ‘preview’ access to Copilot and I’ve been using it on and off. It takes some practice to learn exactly how to phrase your requests in English so that the Copilot AI gives the most useful code output, but it can be surprisingly effective.
However, we are still a long way from “Hey Siri, make me a million-dollar iPhone app”. I still need to use my software design skills to figure out what the different bits of code in my app should do.
To understand the level at which Copilot operates, imagine you are writing an essay. You can’t just throw the essay question at it and expect it to produce a useful, well-argued piece. But if you work out the argument and write the topic sentence for each paragraph, it will often do quite well at filling in the rest of each paragraph.
Depending on the type of coding I’m doing, this can sometimes be a huge time and brain saver.
Bias and bugs
There are some open questions with these types of AI coding tools. I’m a little worried that they’ll entrench and amplify a winner-takes-all dynamic: very few companies have the data (in this case, the billions of lines of code) to build such tools, so creating a competitor to Copilot will be a challenge.
And will Copilot itself be able to propose new and better ways to write code and build software? We have seen AI systems innovate before. On the other hand, Copilot may be limited to doing things the way we’ve always done them, as AI systems trained on past data tend to do.
My experiences with Copilot have also made me acutely aware that my expertise is still needed, to verify that the “suggested” code is really what I’m looking for.
Sometimes it’s trivial to see that Copilot has misunderstood my input. Those are the simple cases, and the tool makes it easy to ask for another suggestion.
The trickier cases are where the code looks good, but may contain a subtle bug. The bug may arise because AI code generation is difficult, or because the billions of lines of human-written code Copilot is trained on contain bugs of their own.
Another concern is the possible licensing and ownership problems around the code Copilot has been trained on. GitHub says it is trying to deal with these problems, but we’ll have to wait and see how that turns out.
More output from the same input
Sometimes I feel a little wistful about using Copilot. The skill I often think makes me at least a little bit special (my ability to write code and make things with computers) could be “automated away”, as many other jobs have been at different times in human history.
However, I am not selling my laptop and running off to live a simple life in the bush just yet. The human coder is still a crucial part of the system, but more as a curator than a maker.
Of course you might think “that’s what a coder would say” … and maybe you’re right.
AI tools like Copilot, OpenAI’s text generator GPT-3 and Google’s Imagen text-to-image engine have seen tremendous improvements in recent years.
Many in the “creative industries” that deal with text and graphics are beginning to grapple with the fear of being (at least partially) automated away. Copilot shows that some of us in the tech industry are in the same boat.
Still, I’m (cautiously) excited. Copilot is a force multiplier in the most optimistic tradition of tool building: it provides more leverage to increase usable output for the same amount of input.
These new tools and the new leverage they provide are embedded in wider systems of people, technology and environmental actors, and I’m really fascinated to see how these systems reconfigure themselves in response.
In the meantime, it can help save my brain juice for the hard parts of my coding work, which can only be a good thing.
This article by Ben Swift, Team Leader Educational Experiences (Lead Lecturer), ANU School of Cybernetics, Australian National University, is republished from The Conversation under a Creative Commons license. Read the original article.