Developers working in the "pit" at the MGM Grand Garden Arena in Las Vegas during Amazon's re:Invent conference, preparing their DeepRacers for competition. Jason Levitt

Sadly, there's one tech toy that Amazon won't be able to sell you for Christmas this year. DeepRacer is an autonomous 1/18th-scale race car that was unveiled at Amazon re:Invent in November. But it won't be available until March 2019 at the earliest, so all you can do now is pre-order it on Amazon. It's too bad we'll have to wait, because this car could help developers understand reinforcement learning, a type of machine learning commonly associated with self-driving cars, and it should entertain hackers of all ages.

DeepRacer is really a full-blown Linux computer on wheels, running on an Intel Atom processor with 4GB of RAM. A closer look at its guts reveals that the car is essentially a modification of DeepLens, the video-camera-and-computer combination released at re:Invent last year. For the new product, DeepLens has been set on wheels and given an extra battery and a few other bells and whistles. The earlier product has proven to be a popular learning tool for neural networks, but DeepRacer has an added bonus: competition.

The AWS DeepRacer League, Amazon's competition system for DeepRacer developers, will culminate in a championship each year at re:Invent. At the recent re:Invent conference, barely 24 hours were allotted for developers to program DeepRacer cars and compete, but we were there to check out the action. DeepRacer cars, AWS accounts, and the entire MGM Grand Garden Arena were pimped out to help developers create and test models. Participants could quickly get up to speed using two labs provided in Amazon's GitHub account.

Training Wheels

  • The reward function, a Python function used to train DeepRacer to navigate around the track. Jason Levitt
  • The variables that can be tweaked within DeepRacer's reward system to keep the car on the track.
  • Pressing the training button launches a virtual machine that runs your code and automatically trains your model.
  • After training, you can use the AWS Management Console to evaluate your model.

Though DeepRacer cars will be used at the finals each year, you don't need to shell out $399 for one. You can do all of the training and evaluation of your reinforcement learning models online, via the AWS Management Console using Amazon's SageMaker service. You can sign up for the DeepRacer developer preview now, which should leave plenty of time to get ready before the physical cars ship in March.

Winning the competition revolves around enhancing a rudimentary reward function, provided in Python in the AWS Management Console, that is used to train your model to keep your DeepRacer car on the road. There are numerous variables you can use to enhance the function, and most of your development time will be spent figuring out how to use them so that your model keeps the car moving quickly around the race track. After modifying the Python code, you simply press the training button; a virtual machine is launched to run your code and automatically train your model.
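To give a flavor of what that tuning looks like, here is a minimal sketch of a reward function in the general style the console expects. The parameter names below (all_wheels_on_track, distance_from_center, track_width) are assumptions drawn from AWS's published DeepRacer variable list and may not match the developer preview console exactly; treat it as an illustration rather than a competition-ready entry.

    def reward_function(params):
        """Reward the car for staying on the track and near the center line.

        `params` is the dictionary the DeepRacer environment passes in on each
        simulation step; the keys used here are assumed names and may differ
        in the developer preview console.
        """
        all_wheels_on_track = params['all_wheels_on_track']
        distance_from_center = params['distance_from_center']
        track_width = params['track_width']

        # Leaving the track earns almost nothing.
        if not all_wheels_on_track:
            return 1e-3

        # Reward in bands: the closer the car hugs the center line,
        # the larger the reward.
        if distance_from_center <= 0.1 * track_width:
            reward = 1.0
        elif distance_from_center <= 0.25 * track_width:
            reward = 0.5
        elif distance_from_center <= 0.5 * track_width:
            reward = 0.1
        else:
            reward = 1e-3  # probably about to leave the track

        return float(reward)

Most of the competitive tuning happens inside a function like this one, for example by also rewarding speed or penalizing sharp steering, before the training job takes over.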

Typically, you'll want to allow at least 20 minutes for training. The AWS Management Console provides visual feedback on how things are progressing in the form of a graph and a video.

After training, the next step is normally to download your model from the AWS Management Console to your laptop and transfer it via Wi-Fi to your DeepRacer vehicle. Since we won't be seeing the cars until March 2019, you can simply have the AWS Management Console evaluate your model instead. In my case, the evaluation ran three times, producing data on how long each run lasted and what percentage of the racetrack the virtual car covered before going off into the weeds.

The Racetrack

  • The racing experience at re:Invent.
  • The practice area at MGM Grand's Garden Arena.
  • Watching races in progress at Amazon's re:Invent conference in November.

Should you be lucky enough to obtain a physical car (there were a handful available on eBay at publication time), you can control it via a Web app running on your cell phone over Wi-Fi. Once you download your model to the car, the model takes control and attempts to keep the car on the racetrack, using its DeepLens video camera to "see" the lines on the road.

Note that there is also a manual mode on the Web app just in case you want to take the car for a spin without having your model in control.

This year's finals winner was Rick Fish, who managed to get around the track in about 52 seconds. There's little doubt that next year's winner will cut that time in half or better.

Jason Levitt is a former InformationWeek technology editor, a former Yahoo technical evangelist, and a current technology consultant based in Austin, Texas.
