How I Built a League of Legends Voice Assistant with Amazon Alexa

Marty Jacobs · Jul 19, 2017 00:00 · 987 words · 5 minute read


The Echo device connects to the “Alexa” platform, which is where all the intelligence lives, including the ability to connect to League of Legends. After a command has been spoken, Alexa recognises what has been said, understands its intent, then responds back through the Echo device’s speakers. The Alexa platform uses machine learning and natural language processing (NLP) techniques to understand the words it receives. This post won’t cover ML or NLP; instead, it is a practical guide to programming the voice assistant (Alexa) and trialling a real-time data processing solution in a gaming scenario.

The plan?

  • Build a custom Alexa skill using the Alexa Skills Kit SDK
  • Integrate data received from the League of Legends official API
  • Deploy to the Echo device
  • Live demo
  • Ideas on future expansion

Alexa, let's play a game

Whether you’re a casual gamer or a fully fledged PC gamer, you’ll surely see some considerable value in this. We plan to demonstrate a real-time gaming assistant that guides us through playing League of Legends. We also want to show you a one-size-fits-all way to build on top of the Alexa platform, so you can build your own solutions (if you’re not so much into gaming). We chose League of Legends because its creators provide an official application programming interface (API) that we can use to receive live game data while we’re actually playing.

For those who don’t know what an API is: it is a set of functions that have been made discoverable and are intended to be used by developers. For example, an “add” service is made discoverable; the developer provides two numbers, “1” and “2”, and the function returns the result, “3”. Key point: we don’t know how the function was implemented, only the information it provides. The same goes for the live game data. We won’t know how they implemented their functions to get it… because the implementation is behind the API.
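To make that concrete, here’s a toy sketch of the idea (a hypothetical in-process example — the real LoL API works over HTTPS): callers only see the function’s name, inputs, and output, never its implementation.

```javascript
// A toy "API": callers know only the interface, not the implementation.
const mathApi = {
  // Contract: takes two numbers, returns their sum.
  add(a, b) {
    return a + b; // could be rewritten internally without breaking callers
  },
};

console.log(mathApi.add(1, 2)); // → 3
```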


Easy-mode

Amazon have made it pretty darn easy to build on top of the Alexa platform. Still, we want to show you how we did it (it might save you some time). Oh yeah… we’ll also assume that you have signed up for an Amazon developer account (free of charge).

Simple steps:

  1. Select a region for your account - we selected US East (N. Virginia), as we ran into deployment problems using the Asia Pacific region.
  2. Go to the AWS Lambda service and select ‘Functions’ > ‘Create Lambda Function’ > ‘Blank Function’.
  3. Click on the empty trigger box to the left of the Lambda symbol, and select ‘Alexa Skills Kit’.
  4. Select your runtime environment (we will use Node.js), enter a function name, and select the role ‘lambda_basic_execution’.
  5. Select ‘Create function’ and voilà, you now have a bare-bones skill set up on AWS Lambda.

Let’s quickly side-step: AWS Lambda is a Function-as-a-Service (FaaS) platform that lets developers upload their code directly. Why is it good? We don’t have to worry about infrastructure. Yep, that’s right: gone are the days of breaking the bank and worrying about wasted compute. You are charged based entirely on function calls, and guess what… AWS gives every account 1 million free function calls per month.

Now the fun part…

  1. Go to the ‘Code’ section of your Lambda function and upload a ZIP file containing your Alexa skill, or copy-paste it directly in the editor. Here is a great sample Alexa skill to start building from.
  2. To test it, go to the Alexa getting started page and select ‘Get started’.
  3. Select ‘Add a New Skill’, fill out the name you would like for your skill, and the invocation name you will use to invoke it.
  4. Next, you will need to upload an intent schema and sample utterances.

What is an intent schema? It is just a list of all the custom functions (intents) that you have created. For example, we have an intent getCurrentGameInfoIntent, which calls the LoL API and returns the current game information. The intent schema is in JSON format.
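A sketch of what ours looked like (the built-in AMAZON.* intents are standard; getCurrentGameInfoIntent is our custom one):

```javascript
// Intent schema for the skill: the JSON list of intents Alexa should
// recognise. getCurrentGameInfoIntent is our custom intent; the
// AMAZON.* entries are built-ins every skill should handle.
const intentSchema = {
  intents: [
    { intent: 'getCurrentGameInfoIntent' },
    { intent: 'AMAZON.HelpIntent' },
    { intent: 'AMAZON.StopIntent' },
  ],
};

// This JSON is what gets pasted into the Intent Schema box.
console.log(JSON.stringify(intentSchema, null, 2));
```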

What are sample utterances? They are the phrases spoken to trigger the intents we have created. For example, a sample utterance for getCurrentGameInfoIntent might be “getCurrentGameInfoIntent what is currently going on”. Take note: the intent name is always listed in the first position of the statement.
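A few sample utterances for our intent might look like this (one intent–utterance pair per line, intent name first; the exact phrasings are just illustrative):

```
getCurrentGameInfoIntent what is currently going on
getCurrentGameInfoIntent what is going on in-game right now
getCurrentGameInfoIntent give me the current game information
```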

Cool! Almost there…

  1. Hit next, and we now configure the endpoint used to call the Alexa skill.
  2. Go back to your AWS Lambda function; in the top right-hand corner you should see the ARN in bold letters. Copy the ARN, for example: “arn:aws:lambda:us-east-1:23434…”
  3. Now paste it into the Endpoint field of the Alexa skill configuration.



  4. Now you can test your Alexa skill using the Service Simulator. Invoke your skill by using the invocation name you chose.
  5. Check whether your Lambda function is being called correctly by entering an utterance phrase, for example “what is going on in-game right now?”
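When you type an utterance into the Service Simulator, it sends your Lambda an IntentRequest. A trimmed-down sketch of that JSON (session and context fields omitted for brevity) looks roughly like this:

```javascript
// A trimmed-down sketch of the IntentRequest the Service Simulator
// sends to the Lambda function (many fields omitted for brevity).
const sampleRequest = {
  version: '1.0',
  request: {
    type: 'IntentRequest',
    intent: {
      name: 'getCurrentGameInfoIntent',
      slots: {},
    },
  },
};

// A handler typically switches on request.type and the intent name.
console.log(sampleRequest.request.intent.name); // → getCurrentGameInfoIntent
```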

Next level

This is just one simple use case showing the power of voice technology. We believe it adds an extra layer to the innovation process. Typically these voice devices are used for home automation (e.g. turning off the lights or making toast), but they extend much further, as shown above. We selected the Echo device for its sheer popularity in the software community, and we weren’t disappointed, to say the least!

Alexa, Exit

The age of voice is very real. Businesses and consumers are adopting voice technology to solve their problems and make their lives much easier. We think it advances the automation industry tenfold. Why is it so good? Because it allows us to move faster, find out more, and pushes us closer to ubiquitous computing in everyday life. The use cases appear endless, limited only by our imaginations. We’re pretty stoked with our purchase and we’ll be putting it to good use! ’Til next time.

Alexa, say goodbye…