Alexa.NET.APL 1.6.4

Small helper library to allow Alexa.NET skills to work with APL

Package Manager: Install-Package Alexa.NET.APL -Version 1.6.4
.NET CLI: dotnet add package Alexa.NET.APL --version 1.6.4
Paket CLI: paket add Alexa.NET.APL --version 1.6.4

Alexa.NET.APL

Small helper library to allow Alexa.NET skills to work with APL

Access to APL ViewPort Data within your skill

There are new Display and Viewport properties available within the request sent to an APL-enabled skill.
Rather than force this onto the base Alexa.NET package, APL skills use an enhanced SkillRequest object, APLSkillRequest, which exposes these new properties.
Amazon's documentation on viewport characteristics: https://developer.amazon.com/docs/alexa-presentation-language/apl-viewport-characteristics.html
Here's an example signature and opening line for a Lambda function:

public Task<SkillResponse> FunctionHandler(APLSkillRequest input, ILambdaContext context)
var shape = input.Context.Viewport?.Shape;
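
For context, a minimal complete handler might look like the sketch below. It uses the ResponseBuilder call shown later on this page; the namespace for APLSkillRequest and the class name Function are assumptions, so adjust the using directives to match your project.

using System.Threading.Tasks;
using Alexa.NET;
using Alexa.NET.APL;            // assumed namespace for APLSkillRequest
using Alexa.NET.Response;
using Amazon.Lambda.Core;

public class Function
{
    public Task<SkillResponse> FunctionHandler(APLSkillRequest input, ILambdaContext context)
    {
        // Viewport is null on devices without a screen, so keep the access null-safe
        var shape = input.Context.Viewport?.Shape;
        var response = ResponseBuilder.Tell($"Your viewport is {shape?.ToString() ?? "Non existent"}");
        return Task.FromResult(response);
    }
}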

Creating a Layout Document

Alexa.NET.APL has a set of APL components so that layouts can be created entirely within the C# object model.
All properties are of type APLValue<T>, which allows you to specify either an absolute value or an APL data-binding expression for each property.

new Layout(new[]
{
    new Container(new APLComponent[]{
        new Text("APL in C#"){FontSize = "24dp", TextAlign = "Center"},
        new Image("https://example.com/image.jpg"){Width = 400, Height = 400}
    }){Direction = "row"}
})
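
Because string properties accept either form, the same component can take a literal or a data-binding expression. The payload path below is purely illustrative and assumes a matching data source is sent alongside the document.

// Absolute value
new Text("APL in C#"){FontSize = "24dp"}

// APL data-binding expression, resolved against a data source at render time
// (payload.templateData.* is a made-up path for illustration)
new Text("${payload.templateData.title}"){FontSize = "${payload.templateData.fontSize}"}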

Sending a RenderDocument Directive

RenderDocument hooks into the same Alexa.NET directive mechanism that already exists: you reference the layout, a document token, and any data sources you want to send along with it.

var shape = input.Context.Viewport?.Shape;
var response = ResponseBuilder.Tell($"Your viewport is {shape?.ToString() ?? "Non existent"}");

var directive = new RenderDocumentDirective
{
    Token = "randomToken",
    Document = new APLDocument
    {
        MainTemplate = new Layout(new[]
        {
            new Container(new APLComponent[]{
                new Text("APL in C#"){FontSize = "24dp", TextAlign = "Center"},
                new Image("https://images.example.com/photos/2143/lights-party-dancing-music.jpg?cs=srgb&dl=cheerful-club-concert-2143.jpg&fm=jpg"){Width = 400, Height = 400}
            }){Direction = "row"}
        })
    }
};

response.Response.Directives.Add(directive);
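
Data sources can be attached to the directive as well. The sketch below assumes the directive exposes a DataSources dictionary and uses the ObjectDataSource type mentioned in the release notes; treat the property names as assumptions and check them against the library source.

// Assumed: RenderDocumentDirective.DataSources is a dictionary keyed by data source name,
// and ObjectDataSource carries a Properties bag the document can bind against.
directive.DataSources = new Dictionary<string, APLDataSource>
{
    ["templateData"] = new ObjectDataSource
    {
        Properties = new Dictionary<string, object>
        {
            ["title"] = "APL in C#"
        }
    }
};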

Receiving SendEvent Commands from your layout

Commands are supported within Alexa.NET.APL. To send events from your layout back to your skill, use the SendEvent command:

https://developer.amazon.com/docs/alexa-presentation-language/apl-standard-commands.html#sendevent-command

var wrapper = new TouchWrapper
{
    OnPress = new SendEvent
    {
        Arguments = new Dictionary<string, object> {{"sent", true}}
    }
};

To receive these events you need to add support for UserEventRequest; the registration can be placed in your Lambda constructor:

new UserEventRequestHandler().AddToRequestConverter();

and then you treat them like any other request type, for example

if (input.Request is UserEventRequest userEvent)
{
    var token = userEvent.Token;
    var argument = userEvent.Arguments["sent"];
}
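
From there the event can drive a normal response. A small illustrative sketch (the spoken text is arbitrary):

if (input.Request is UserEventRequest userEvent)
{
    // Token matches the one supplied in the RenderDocument directive;
    // Arguments carries whatever the layout's SendEvent command declared.
    var sent = userEvent.Arguments["sent"];
    var response = ResponseBuilder.Tell($"The layout reported sent = {sent}");
}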

Your user may of course also interact with your skill by voice, at which point you need to be able to send commands down to your layout. This is done with the ExecuteCommands directive:

using Alexa.NET.APL.Commands;
...
var directive = new ExecuteCommandsDirective("token", new[]
{
    new SetPage
    {
        ComponentId = "exampleId",
        Value = 3
    }
});
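
As with RenderDocument, the directive is then attached to the outgoing response:

response.Response.Directives.Add(directive);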

Release Notes

Make transformers within ObjectDataSource optional
Make Data in Sequence control an APLValue<Dictionary<string,object>> to allow data binding
(thanks Martin!)

Version History

Version Downloads Last updated
1.6.4 21 12/7/2018
1.6.3 181 11/11/2018
1.6.1 41 11/11/2018
1.6.0 41 11/11/2018
1.5.0 83 11/6/2018
1.4.3 41 11/6/2018
1.4.2 38 11/6/2018
1.4.1 41 11/6/2018
1.4.0 59 11/5/2018
1.3.2 39 11/5/2018
1.2.2 41 11/5/2018
1.2.1 42 11/5/2018
1.2.0 41 11/5/2018
1.1.0 41 11/4/2018
1.0.0 61 11/2/2018