Xamarin: Adding Facial Recognition to Your Mobile Apps

Whether it’s automatically tagging friends in our Facebook photos or “live filters” on Snapchat, facial recognition is increasingly becoming a large part of our everyday lives. Adding facial recognition to software in the past was extremely challenging, however. Now, though, Microsoft Cognitive Services makes it as easy as a few lines of code—for free. Whether […]

The post Adding Facial Recognition to Your Mobile Apps appeared first on Xamarin Blog.

Details

Xamarin: Making Your iOS Apps IPv6 Ready

On June 1, Apple started enforcing a new policy whereby all iOS applications must support IPv6-only network services in iOS 9. While Apple states that most apps will not need to be changed or updated, your app may be using a few libraries that need to be updated before you submit your next update. Issues […]

The post Making Your iOS Apps IPv6 Ready appeared first on Xamarin Blog.

Details

Wallace B. (Wally) McClure: Golfing for the Peyback Foundation and Children’s Hospital – Results

The Results of the Morning and Afternoon Rounds:

The Morning Results

The Afternoon Results

We had a long and winding day on Monday scoring the Morning and Afternoon portions of the Children’s Hospital – Peyback Foundation Charity Golf Tournament.  Each portion was flighted, with the flighting handled by the application.  Pictures were taken of each team while on the course and immediately uploaded to the scoring system for display on the scoreboard.  As each portion finished, the teams were placed into their flights, and each team’s scores were shown within its flight along with the team’s pictures.

The scoring system is hosted in Azure.  The picture upload is done via an iPhone application written in C#/Xamarin.

I learned a few new things that I will work on and resolve the next time around.

Details

Xamarin: Podcast: WWDC 2016 Recap

This week on the Xamarin Podcast, Pierce and I cover everything you need to know as a mobile developer from Apple’s WWDC 2016, including new additions and upgrades to iOS, macOS, watchOS, and tvOS. Subscribe or Download Today Knowing the latest in .NET, C#, and Xamarin is easier than ever with the Xamarin Podcast! The Xamarin […]

The post Podcast: WWDC 2016 Recap appeared first on Xamarin Blog.

Details

Greg Shackles: Alexa, Is My Infrastructure on Fire?

I recently broke down and purchased an Amazon Echo after hearing enough good things about it, and also seeing how straightforward it looked to develop for it. It’s no secret that I’m a big fan of Datadog, so naturally I felt like I needed to mix the two. I’ve previously covered exposing Datadog metrics through Hubot, so I figured I’d try to do something similar for the Echo.

I decided to create and host the skill through an AWS Lambda function which made it really easy to get started and deploy. There’s plenty of documentation around on creating skills in Lambda so I won’t really get into that part here. I also went with the Serverless framework to simplify the development and deployment processes, but that’s not actually too important to the implementation here. Ultimately it’s just a simple Lambda function tied to an Alexa skill.

At present, it exposes the current CPU levels of any hosts in your account. For example, saying:

Alexa, ask Datadog to check the CPU

will result in a response along the lines of:

Here are the current CPU loads. Gregs MacBook Pro is at 7%. Gregs iMac is at 4%

I think that’s pretty awesome, so let’s take a look at how to implement it.

Defining the Interaction Model

First, we need to define the skill’s interaction model in Amazon’s developer console.

Intent Schema

The intent schema is the primary manifest of what your skill can do, and how users will interact with it. For this skill we’ll keep it simple and just expose a single intent for querying:

{
  "intents": [
    {
      "intent": "QueryIntent",
      "slots": [
        {
          "name": "Query",
          "type": "QUERY_LIST"
        }
      ]
    }
  ]
}

Eventually it would be great to build this out further and make the skill more conversational and interesting, but this is a sufficient starting point.

Custom Slot Types

In the intent schema you may have noticed the QUERY_LIST type, so now we need to actually define that. This is a custom slot that defines a list of the types of queries we can do. For now it will just contain a single value:

Type          Values
QUERY_LIST    cpu

This provides a nice place to expose more formal query types as the skill gets extended.

Sample Utterances

Finally, we need to give Amazon a list of sample utterances for the skill, which teach it the different ways users can invoke each intent. We’ll give it a few variations:

QueryIntent query {Query}  
QueryIntent check {Query}  
QueryIntent to query {Query}  
QueryIntent to check {Query}  
QueryIntent to query the {Query}  
QueryIntent to check the {Query}  

Implementing the Skill

With all that configuration out of the way, let’s look at the code involved in implementing the skill. Just like in that Hubot plugin I created, we’ll leverage the dogapi package to query the Datadog API. I’ll only include the interesting bits in this post, but the full sample can be found on GitHub.
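
If you’re building the project up from scratch rather than starting from the sample, both of the packages used below are on npm and can be added the usual way (assuming a standard Node project):

npm install --save dogapi bluebird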

Talking to Datadog

First, let’s build out a function to query CPU values from Datadog:

import dogapi from 'dogapi';  
import Promise from 'bluebird';

const queryDatadog = Promise.promisify(dogapi.metric.query);

function queryCPU() {
  // Query the last five minutes of per-host user CPU from Datadog
  const now = parseInt(new Date().getTime() / 1000);
  const then = now - 300;
  const query = 'system.cpu.user{*}by{host}';

  return queryDatadog(then, now, query)
    .then(res => res.series.map(reading => ({
      // Strip the 'host:' prefix and domain suffix, then replace
      // non-word characters with spaces to make the name speech-friendly
      name: reading.scope
                   .replace(/^host:/i, '')
                   .replace(/\..*$/i, '')
                   .replace(/\W/g, ' '),
      // Use the most recent data point in the series
      value: reading.pointlist[reading.pointlist.length - 1][1]
    })));
}

Here I’m making use of bluebird, which is a great Promise library that comes with a lot of useful functionality, on top of being very performant. I definitely recommend using this as a replacement for native Promises when working with AWS Lambda functions, as it performs much better and has a significantly lower memory footprint.
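
As a side note, if you want bluebird to back every Promise in the function rather than just the ones created through the explicit import, one common pattern is to assign it over the global Promise once at startup (entirely optional here):

import Promise from 'bluebird';

// Optional: make bluebird the implementation behind every Promise in the process,
// not just the ones created through the explicit import above
global.Promise = Promise;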

There’s not too much to the implementation here. It goes out to Datadog, grabs the latest CPU reading for each host, and then does a little processing on the host name to make it more speech-friendly.
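
To make that cleanup concrete, here’s the same chain applied to a scope value shaped like the ones behind the sample response above (the hostname itself is just illustrative):

// Illustrative scope value from a Datadog series entry
const scope = 'host:Gregs-MacBook-Pro.local';

const name = scope
  .replace(/^host:/i, '')   // 'Gregs-MacBook-Pro.local'
  .replace(/\..*$/i, '')    // 'Gregs-MacBook-Pro'
  .replace(/\W/g, ' ');     // 'Gregs MacBook Pro'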

Processing the Intent

When a request comes in for the QueryIntent we defined earlier in the schema, we’ll need to process that. Here’s an example of the type of data that will come in with our intent:

{
  "session": {
    "sessionId": "SessionId.908e5538-9a5e-4201-b20b-0ed7cc6761bb",
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.a5cc355a-042d-4fdc-aabe-afe711657217"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account.AFP3ZWPOS2BGJR7OWJZ3DHPKMOMNWY4AY66FUR7ILBWANIHQN73QG6UY2L643DAVVTC3PB2PVHFZK5MHTXAE2T2FZOUVC7KVMZIYIB7YBARDE3AUU6WBMM7AYTZBFPK5NSXAIC5KJIVRZNGIRYPZCP2A4XPVFVI3JF3ZU5PKQ3PJDBTPKTNS7WI23SDK4ISXWOXDHMMLQ5FLLTI"
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.f01bc54b-6d75-4354-a478-08ec5b3cfed1",
    "timestamp": "2016-06-20T00:11:14Z",
    "intent": {
      "name": "QueryIntent",
      "slots": {
        "Query": {
          "name": "Query",
          "value": "CPU"
        }
      }
    },
    "locale": "en-US"
  },
  "version": "1.0"
}

Based on that, we can easily implement a function to pull the query value out of the intent and send it over to Datadog:

function processIntent(intentRequest, session) {  
  const intent = intentRequest.intent;

  if (intent.name === 'QueryIntent') {
    const querySlot = intent.slots.Query;

    if (querySlot.value && querySlot.value.toLowerCase() === 'cpu') {
      return queryCPU().then(readings => {
        const hostSpeechFragments = readings.map(reading =>
          `${reading.name} is at ${reading.value}%`).join('. ');
        const speechOutput = `Here are the current CPU loads. ${hostSpeechFragments}`;

        return buildSpeechletResponse(
          'CPU Load', 
          speechOutput,
          null, 
          true);
      });
    }
  }

  return Promise.resolve(buildSpeechletResponse(
    'Datadog Query',
    'Sorry, I don\'t know that query',
    null,
    true
  ));
}

Most of that code is around validation and parsing. Once it gets a list of CPU readings it turns them into something readable and forms a spoken response based on them. The buildSpeechletResponse function referenced here is a simple helper method that formats things the way the Alexa API expects them. The code for that method can be found in the helpers file. If we get a query value other than CPU we simply respond saying that we don’t understand that query.

The true passed as the last argument to buildSpeechletResponse denotes that each response will end the session with the user. In a more interesting implementation you could imagine keeping the session open and making things more conversational, but for now we’ll keep it to a single operation.
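
For reference, here’s a rough sketch of what buildSpeechletResponse (and the companion buildResponse used in the handler below) typically look like, following the standard Alexa Skills Kit response shape; the actual versions live in the sample’s helpers file on GitHub:

function buildSpeechletResponse(title, output, repromptText, shouldEndSession) {
  // A single spoken response plus a simple card, in the shape Alexa expects
  return {
    outputSpeech: {
      type: 'PlainText',
      text: output
    },
    card: {
      type: 'Simple',
      title: title,
      content: output
    },
    reprompt: {
      outputSpeech: {
        type: 'PlainText',
        text: repromptText
      }
    },
    shouldEndSession: shouldEndSession
  };
}

function buildResponse(sessionAttributes, speechletResponse) {
  // Top-level envelope handed back to Alexa from the Lambda function
  return {
    version: '1.0',
    sessionAttributes: sessionAttributes,
    response: speechletResponse
  };
}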

The Handler

Finally, we need to tie it all together and process the incoming request to our Lambda function:

module.exports.handler = function(event, context, callback) {  
  dogapi.initialize({
    api_key: process.env.DATADOG_API_KEY,
    app_key: process.env.DATADOG_APP_KEY
  });

  if (event.request.type === 'IntentRequest') {
    processIntent(event.request, event.session)
      .then(speechletResponse =>
        context.succeed(buildResponse({}, speechletResponse)));
  }
};

When a request comes in, we initialize dogapi with our API and app keys, and process the intent. You can specify your own keys by adding them as Serverless variables, such as through the _meta/variables/s-variables-dev.json file, in this format:

{
  "datadogApiKey": "your-api-key-here",
  "datadogAppKey": "your-app-key-here"
}
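
How those variables end up in the process.env keys the handler reads depends on your Serverless configuration; with the 0.x layout used here, one approach is to reference them from the environment block of the function’s s-function.json (a hypothetical fragment, purely to illustrate the mapping):

{
  "environment": {
    "DATADOG_API_KEY": "${datadogApiKey}",
    "DATADOG_APP_KEY": "${datadogAppKey}"
  }
}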

That’s it! The full source for this sample is available on GitHub. It may look like a lot, but setting up Alexa skills is really quite simple, especially when you use AWS Lambda to define them. With just a few lines of code and configuration you can add interactive, speech-driven APIs to anything.

Alexa, is that cool or what?

Details