Natural Language Processing with Microsoft LUIS - How to build, train and publish your model (part 1 of 2)

Philip Rashleigh

11 March 2022 - 10 min read

AI, Cloud Computing, Machine Learning, Microsoft Azure

Microsoft LUIS is a part of Microsoft Azure — Microsoft’s cloud hosting platform.

Azure has many pre-built services for hosting web applications, databases, application gateways, and many other types of resource.

Within Azure, there is a subset of resources which make up Azure Cognitive Services — a set of tools that allows users to easily build and deploy artificial intelligence models as RESTful APIs. 

Microsoft LUIS sits within Azure Cognitive Services. LUIS is a platform for natural language processing and can be used to create chatbots, virtual assistants, IoT experiences and other artificial intelligence services.

In this two-part series, Audacia’s Technical Director, Philip Rashleigh, will give an overview of how Microsoft LUIS can be used to create a natural language processing model and deploy it as a RESTful web API that can be consumed via a web application.

Understanding Utterances

The basis of natural language processing in LUIS is ‘utterances’. An utterance is a statement someone might say or a question they might ask.

To understand utterances, consider a virtual assistant like Siri, Amazon Echo or Google Home. There are a number of phrases that could be used to request that your virtual assistant play your favourite song, for example:

  • “Play MMMBop by Hanson”
  • “Play the song MMMBop”
  • “Play MMMBop”
  • “Play me a song by Hanson”

We can break these utterances down into the component parts of:

  1. Intent: what is the intended action that should result from an utterance - i.e.:

    a. Intent: “Play Song”

  2. Entities: what entities (nouns) are associated with the utterance - i.e.:

    a. Artist: “Hanson”

    b. Song: “MMMBop”

If we revisit our above utterances and mark up these components, you can see how a suitable AI might be able to extract the intent (Play Song) and the related entities from each one:

  • “Play [Song: MMMBop] by [Artist: Hanson]”
  • “Play the song [Song: MMMBop]”
  • “Play [Song: MMMBop]”
  • “Play me a song by [Artist: Hanson]”

The LUIS AI technology is designed to do exactly that: take a user’s utterance and extract the intent and the related entities from it.
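
As a rough illustration, this decomposition could be modelled as below. This is our own TypeScript sketch, not the actual LUIS response schema; the type and field names are ours.

```typescript
// Illustrative only: our own types for the intent/entity breakdown,
// not the real LUIS response format.
interface ExtractedEntity {
  type: string;  // e.g. "Song" or "Artist"
  value: string; // e.g. "MMMBop" or "Hanson"
}

interface UtteranceAnalysis {
  utterance: string;
  intent: string; // e.g. "Play Song"
  entities: ExtractedEntity[];
}

// "Play MMMBop by Hanson" broken down into its component parts:
const example: UtteranceAnalysis = {
  utterance: "Play MMMBop by Hanson",
  intent: "Play Song",
  entities: [
    { type: "Song", value: "MMMBop" },
    { type: "Artist", value: "Hanson" },
  ],
};
```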

Azure Resources

In order to use LUIS you will need an Azure Subscription.

LUIS uses two types of Azure Resource:

  1. Authoring Resource - where the model is created, configured, built and trained.
  2. Prediction Resource - the endpoint to which the model will be published once it has been trained. Once a model has been published to a prediction resource it can be queried as a RESTful API.

Whilst Azure has UK West and UK South regions, LUIS authoring resources are currently limited to a subset of Azure’s regions and are not (yet) supported in either UK region. This is worth bearing in mind: if, for data protection or security reasons, all your application data must be hosted within the UK, this cannot be achieved out of the box.

Azure’s “West Europe” region is hosted in Amsterdam and supports LUIS, so it should suffice if the data hosting requirement is “in Europe”.

For further information about Azure’s region support for LUIS please consult the Microsoft documentation.

Configuring LUIS

We’ll be using LUIS to build a model that will allow us to understand requests to book meeting and break-out rooms.

To get started, head over to https://luis.ai and log in with the same Microsoft Account that is associated with your Azure subscription. LUIS will then guide you through the process of setting up an Authoring Resource:

Once we’ve created our authoring resource, we create a new LUIS app by selecting the “New App” button:

“Prediction Resource” can be left blank for now (we will return to this when we publish our model).

Once your app has been created, Microsoft will present you with a “How to create an effective LUIS app” guide which makes useful reading.

Once you have dismissed the guide you will be greeted by the “Intents” screen.

In order to allow for meeting room booking requests we’re going to create two intents:

  1. CheckAvailability - which we will use to ask whether a room is available at a certain time.
  2. BookRoom - which we will use to request an actual room booking.

To create the intents click the “Create” button on the “Intents” screen.

Each of these intents is going to have a few related utterances against it; however, we will leave these for now as we want to set up our entities first.

Prebuilt Entities 

Because there are some really common use cases for entities, there are some prebuilt entities within LUIS (the full list of which can be found in the Microsoft documentation). These prebuilt entities are optimised to extract information such as names, ages, geographic locations, dates, times, and many more.

Navigate to the “Entities” screen then click the ‘Add prebuilt entity’ option to add the below entities: 

  • datetimeV2

    - Getting a date is a common use case in language processing and there are many different ways of expressing a point in time or a range of time (for example, you could say “the 23rd of September 2021, at 3pm”; “next Tuesday at five”; “a year ago today”; “last Wednesday between 4pm and 5pm”). datetimeV2 worries about this so you don’t have to.

    - We’ll be using datetimeV2 in order to see whether a room is available at a chosen time and to place bookings at particular times.

  • personName

    - This entity is particularly good at spotting where someone’s name is in text.

    - We’ll be using personName to extract meeting attendees; for example, we could interpret “book a meeting with Philip White”.

Lists

We’ll also be using a list entity. Lists can be used when there is a finite set of options. In this case, we’re going to use a list entity to extract which room we’d like to book.

In Audacia’s Leeds office, there are three meeting rooms: the boardroom and two “pods”, which are smaller breakout rooms.

To create a list, from the “Entities” screen select “Create”, then “List”. Name the list “Room”.

When creating a list entity, you can then specify a set of options; in this case we will be adding:

  • board room
  • pod 1
  • pod 2

Synonyms can also be added to account for aliases, as well as nuances and misspellings, in utterances. For example, we may want “big room” and “boardroom” to be synonyms of “board room”. Similarly, with pod 1 you might say “pod1” without a space, or “first pod”.

Additional list items and synonyms can also be added at a later point as required. 
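
Conceptually, a list entity behaves like a lookup from synonyms back to a canonical value. The TypeScript sketch below is purely illustrative (it is not how LUIS is implemented, and the pod 2 synonyms are assumed by analogy with pod 1):

```typescript
// Conceptual sketch of a list entity: a closed set of canonical values,
// each with synonyms that normalise back to the canonical form.
// Note: the "pod 2" synonyms are assumed by analogy with "pod 1".
const roomList: Record<string, string[]> = {
  "board room": ["boardroom", "big room"],
  "pod 1": ["pod1", "first pod"],
  "pod 2": ["pod2", "second pod"],
};

// Resolve a phrase to its canonical room name, if it appears in the list.
function resolveRoom(phrase: string): string | undefined {
  const normalised = phrase.trim().toLowerCase();
  for (const [canonical, synonyms] of Object.entries(roomList)) {
    if (canonical === normalised || synonyms.includes(normalised)) {
      return canonical;
    }
  }
  return undefined;
}

console.log(resolveRoom("first pod")); // "pod 1"
console.log(resolveRoom("Boardroom")); // "board room"
```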

Machine Learned Entity

In addition to the room list and our two prebuilt entities, we’ll also be using a machine-learned entity. As with a list, our machine-learned entity needs a name; for this one, we’ll use the name “BookingTitle”. We’re using a machine-learned entity here because there is no pre-defined set of possible meeting names; instead, we’d like our user to be able to call the meeting anything they would like.

LUIS will use AI to extract meeting names, and we’ll need to train it to spot what a meeting name looks like.

Intent Utterances

With our entities now set up, we’re going to return to the “Intents” screen and add some utterances. 

We’ll start with the CheckAvailability intent and add some example utterances that may be used, such as “is pod one free in two hours”.

After adding an utterance, you will see LUIS automatically begin to match parts of the text to the entities we added:

LUIS has deduced that “in two hours”, refers to a point in time, and that “pod 1” is the room that we’re interested in.

Other ways we could request availability are with questions like “is pod two free”, which would essentially mean: “is it free right now?”

Further examples could include “is the first pod free between two and three today?” or “can I have a meeting in the boardroom?”

The more examples we can give of the kind of questions that someone might ask, the better we can train our model and the more accurate our results should be.

Now let’s add some utterances to our BookRoom intent.

To book a room you might say something like “add a meeting with Philip White in the boardroom today at 4pm titled secret meeting”.

After receiving this request, LUIS extracts some entities; it understands that:

  • Philip White is a person
  • The room we’re interested in is the boardroom
  • The time we want is today at 4pm

As you can see, however, it does not pull out the BookingTitle entity we created. This is because LUIS has no knowledge of which part of this text we want assigned to BookingTitle, and we need to train it on what a BookingTitle looks like.

To do this, select the text “secret meeting” and confirm that this is a BookingTitle:

Continue to add further examples of the sort of utterances you expect, and highlight and identify “BookingTitle” where required:

Above are some further examples that might be used. The hope is that, over time, the LUIS AI will learn how to spot a BookingTitle.

As with AI in general, the more example data you can provide to train the model, the better the predictive outcome will be.
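
To give a feel for the training data this builds up, the sketch below collects the example utterances mentioned in this post, grouped by intent. The structure is our own (it is not the LUIS import format), and the final BookRoom example is a hypothetical extra in the same style:

```typescript
// Our own way of organising example utterances per intent; not a LUIS format.
const exampleUtterances: Record<string, string[]> = {
  CheckAvailability: [
    "is pod one free in two hours",
    "is pod two free",
    "is the first pod free between two and three today",
    "can I have a meeting in the boardroom",
  ],
  BookRoom: [
    "add a meeting with Philip White in the boardroom today at 4pm titled secret meeting",
    // Hypothetical additional example:
    "book pod 2 tomorrow at 10am with Philip White titled sprint review",
  ],
};
```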

Training the Model

To train your model, select “Train” in the top right-hand corner. Note that, depending on the size of your model, this process can take a long time, for example if there are hundreds of intents with thousands of example utterances.

This demo is pretty quick and simple, which means that training a model with the few intents and utterances we have only takes 30 seconds or so.

Testing the Model

Once you’ve trained your model you can start testing it by clicking “Test” in the top right-hand corner and typing in a new utterance.

As you can see, our model has identified that we were most likely intending to book a room (rather than checking availability) and has correctly extracted the room, attendee and date/time. You can see a score against the intent. This represents how sure (out of 1) the model is that this was the intent we wanted.

The more realistic example utterances we supply for each intent, the better our model should get at predicting our intent (once it has been re-trained on the new data).
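
For illustration, here is a small sketch of our own showing what the intent score mentioned above is used for: given a score (between 0 and 1) per intent, we simply take the highest. The numbers are made up:

```typescript
// Illustrative only: made-up scores, not real model output.
const intentScores: Record<string, number> = {
  BookRoom: 0.92,
  CheckAvailability: 0.31,
  None: 0.02,
};

// Pick the intent the model is most confident about.
function topIntent(scores: Record<string, number>): { intent: string; score: number } {
  return Object.entries(scores)
    .map(([intent, score]) => ({ intent, score }))
    .reduce((best, current) => (current.score > best.score ? current : best));
}

console.log(topIntent(intentScores)); // { intent: "BookRoom", score: 0.92 }
```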

Publishing the Model

When we’re happy with our model we can publish it to a prediction resource in Azure, so that we can start accessing it via a RESTful API.

Select “Publish” in the top right-hand corner and select either “Staging Slot” or “Production Slot” (we’d suggest using a staging slot so that the model isn’t immediately in a production state).

Once the model is published, click “Access your endpoint Urls”.

From the “Azure Resources” screen select “Add prediction resources” and add a new prediction resource in the same manner as you did the authoring resource.

You now have an endpoint URL you can use to direct your queries, keys for authentication, and even an example of how to perform a query.

Please note that we’ve redacted our LUIS keys and app ID in the above screenshot.
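
As a quick preview of Part 2, here is a minimal sketch of querying the published model from TypeScript. It assumes the v3 prediction endpoint format shown on the “Access your endpoint URLs” screen; the endpoint, app ID and key below are placeholders, not real values:

```typescript
// Minimal sketch, assuming the LUIS v3 prediction endpoint format.
// Replace the placeholders with the values from "Access your endpoint URLs".
const endpoint = "https://<your-prediction-resource>.cognitiveservices.azure.com";
const appId = "<your-app-id>";
const predictionKey = "<your-prediction-key>";

async function predict(query: string): Promise<unknown> {
  const url =
    `${endpoint}/luis/prediction/v3.0/apps/${appId}/slots/staging/predict` +
    `?subscription-key=${predictionKey}&verbose=true&show-all-intents=true` +
    `&query=${encodeURIComponent(query)}`;

  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`LUIS request failed: ${response.status}`);
  }
  return response.json(); // contains the predicted intent and extracted entities
}

predict("is pod one free in two hours").then(console.log);
```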

In Part 2 we’ll look at how to consume our newly published RESTful API (both from Postman, and via a custom Vue.js app), and how to improve the performance of our model over time.

Audacia is a software development company based in the UK, headquartered in Leeds. View more technical insights from our teams of consultants, business analysts, developers and testers on our technology insights blog.


Philip Rashleigh served as the Technical Director at Audacia from 2010-2023. During his tenure, he was responsible for the overall technical strategy and infrastructure, deciding the most appropriate tools and technologies for mission-critical software projects. Philip also played a key role in engineer recruitment, as well as overseeing infrastructure and information security.