Matt Ward: TypeScript Support in Xamarin Studio

Xamarin Studio and MonoDevelop now have support for TypeScript on Linux, Mac and Windows with an alpha release of the TypeScript Addin.

Editing TypeScript in Xamarin Studio on the Mac

The TypeScript addin uses V8.NET, a library that allows a .NET application to host Google’s V8 JavaScript engine and have JavaScript interact with .NET objects in the host application.

The ability to support Windows, Mac and Linux would not have been possible without the work done by James Wilkins and Christian Bernasko. James Wilkins created the V8.NET library and when it was first released it supported only Windows. Christian Bernasko then took V8.NET and modified it to make it work with Mono on Linux and the Mac. The TypeScript addin is using V8.NET binaries built by Christian from his port of V8.NET.

Please note that this is an alpha release, and because V8.NET uses a native library, a bug in it can cause Xamarin Studio or MonoDevelop to terminate.

Features

  • TypeScript compilation on save or build.
  • Code completion.
  • Find references.
  • Rename refactoring.
  • Go to declaration.
  • Errors highlighted as you type.
  • Code folding.

The addin supports:

  • Xamarin Studio and MonoDevelop 5 and above.
  • TypeScript 1.4.
  • Linux, Mac and Windows.

Installing the addin

The addin is currently available from MonoDevelop’s Add-in Repository in the alpha channel. By default the alpha repository is not enabled so you will have to enable it before you can find and install the addin.

In Xamarin Studio open the Add-in Manager and select the Gallery tab. Click the repository drop down and if Xamarin Studio Add-in Repository (Alpha Channel) is not displayed then click Manage Repositories…. In the window that opens tick the check box next to Xamarin Studio Add-in Repository (Alpha Channel) and then click the Close button.

Enabling alpha channel addins

Back in the Add-in Manager dialog click the Refresh button to update the list of addins. Use the search text box in the top right hand corner of the dialog to search for the addin by typing in TypeScript.

TypeScript addin selected in Addin Manager dialog

Select the TypeScript addin and then click the Install… button.

Note that if you are using Linux 32 bit then you should install the TypeScript Linux 32 bit addin. The other TypeScript addin listed supports Linux 64 bit. Hopefully in the future it will be possible to support both Linux 32 bit and 64 bit using the same addin.

Getting Started

Now that the TypeScript addin is installed let us create a TypeScript file.

To add a TypeScript file open the New File dialog, select the Web category and select Empty TypeScript file.

New File Dialog - New TypeScript File

Give the file a name and click the New button.

Note that currently the TypeScript file needs to be included in a project; standalone TypeScript files are not supported. TypeScript files can be added to any .NET project.

Code Completion

When editing TypeScript code you will get code completion when you type the dot character.

TypeScript dot code completion

Code completion also works when you type the opening bracket of a function.

TypeScript method completion

Go to Declaration

The text editor’s right click menu has three TypeScript menus: Go to Declaration, Find References and Rename.

Text editor context menu with TypeScript menu options

The Go To Declaration menu option will open the corresponding definition in the text editor.

Find References

Find References will show the references in the Search Results window.

TypeScript references shown in Search Results window

Rename

Selecting the Rename menu option in the text editor will open the Rename dialog, where you can type in a new name and click OK to apply the rename.

TypeScript rename dialog

Note that currently on Linux the Rename dialog will only be displayed if the F2 keyboard shortcut is used. Selecting the context menu item will not show the Rename dialog on Linux, but it will work on Windows and on the Mac.

Error Highlighting

Errors in your TypeScript code will be highlighted as you are typing in the text editor.

TypeScript errors highlighted in text editor

Code Folding

Code folding is supported for TypeScript classes, modules and interfaces.

TypeScript code folding

Code folding is disabled by default. To enable it, open the Preferences dialog, select the General category in the Text Editor section, and tick the Enable code folding check box.

Preferences - Enabling code folding

Compiling to JavaScript

By default the TypeScript files will be compiled to JavaScript when the project is compiled.

There are more compiler options available in the project options in the Build – TypeScript category.

TypeScript compiler options for the project

On this page you can change when the compiler is run and what options are passed to the compiler when generating JavaScript code.

If an Output file is specified then all the TypeScript files will be compiled into a single JavaScript file. If an Output directory is specified then the JavaScript files will be generated in that directory instead of next to the TypeScript files.

That is the end of our quick look at TypeScript support in Xamarin Studio and MonoDevelop.

Source Code

The source code for the addin and for the V8.NET engine that works on Mono is available on GitHub.

XHackers Team: Wearables Day

The hot buzzword of the day is Wearables, and we at XHackers are ready to create a buzz. When most of us hear the word, we think of the Apple Watch, Android Wear, the Microsoft Band, Google Glass or even Fitbit. But the history of wearables dates back to 1961!

XHackers

In 1961, MIT professor Edward Thorp, whom we call the Father of Wearables, created and successfully used the first wearable computer to cheat at roulette, giving a 44% edge over the game 🙂 Since then, we have had calculator watches (how many of you remember Casio watches 😉 ), digital hearing aids, Nike+, GoPros, Fitbits and similar clones.

And then one day, Google announced “Project Glass” with a mission statement –

We think technology should work for you – to be there when you need it and get out of your way when you don’t.

It was exciting! With the announcement of the GDK (Glass Development Kit), Android developers could write native Google Glass apps using the Android SDK. This opened up a plethora of opportunities for developers in the wearable computing market. In parallel came a slew of Android-powered watches under the banner of Android Wear. If you didn’t know, Xamarin has supported Google Glass and Android Wear ever since. Some exciting news about new watches is making the rounds… watch out! (pun intended).

Microsoft joined the party too by announcing a cool-looking wrist band called the Microsoft Band. To our surprise, it came with full support for all the leading phone operating systems – iOS and Android along with its very own Windows. With the release of the Band SDK for all platforms and Xamarin’s same-day support, it is now seamless to integrate the Band with iOS and Android apps. What we hear is that Cortana, which used to work only on Windows Phone, will soon work on iOS and Android too, opening up more avenues for apps to integrate voice.
XHackers

The Apple Watch was one of the most exciting announcements from Apple! As you may know, WatchKit was in preview for quite some time. Now that WatchKit has been officially released, and with the actual Watch yet to hit the Apple stores, nothing stops developers from getting their apps ready for D-Day.

With the Xamarin platform, it is now a reality for C# developers to write cross-platform code for all the major wearable platforms – not just to write code for the Apple Watch, Google Glass, Android Wear, or the Microsoft Band, but also to share a good amount of code among them.

So are you excited to learn how to build your wearable apps on all these platforms in C#?

XHackers

Here’s your opportunity to peek into the wearable app development world. Come and learn more about Xamarin and how to program for Wearables in our upcoming meetup.

RSVP Now

What we plan to cover –

  • 09:45 AM – 10:15 AM: Quick introduction to Xamarin, Xamarin.Forms – Pooran
  • 10:15 AM – 10:45 AM: Getting started with Microsoft Band – Vidyasagar
  • 10:45 AM – 11:00 AM: Break
  • 11:00 AM – 11:45 AM: Apple Watch concepts – Pooran
  • 11:45 AM – 12:30 PM: Android Wear concepts – Vidyasagar

See you there!

Blog Credits : Pooran

Cheers
Xhackers Core Team
[email protected]

Johan Karlsson: The Linker – Mono’s virtual 400 HP code chainsaw

One behind-the-scenes tool that most Xamarin newbies know nothing about is the linker. The linker gets called during the build of your assemblies and has a single purpose: to reduce the size of your assembly. So how does it do that, you say? It fires up a digital, virtual, 400-HP chainsaw and cuts away the parts that your code doesn’t use.

GREAT! How do I enable it?!

For iOS and Android the linker is enabled by default for projects that target actual devices and disabled when you target emulators/simulators. The reason for this is to reduce build time when deploying to a simulator.

You can edit the linker settings under project properties: iOS Build for iOS and Android Options for Android.

Anything else I should know?

Yes, there are three levels of linking:
  • Link all assemblies, which means that all code is subject to linking.
  • Link SDK assemblies only, which means that only the Xamarin core assemblies will be linked. This is the default when deploying to actual devices.
  • Don’t link, which means, well, don’t link… This is the default when deploying to simulators/emulators.

Outstanding, why don’t I use Link all all the time then!?

The first reason is that deploy time increases, since linking takes time. So when deploying to the simulator or to your device while testing, it simply is not worth the extra time.
The other, more important reason (which really should have come first) is that the linker can be slightly evil: it can remove stuff that you meant to keep. Linking is carried out through static analysis of the code, so any classes that are instantiated through reflection, and sometimes through IoC, will not be detected and will be cut away. You can save the day by decorating those classes with the [Preserve] attribute to tell the linker to keep them. If you’re coding in a PCL that can’t reference the PreserveAttribute, you can just roll your own; simply call it “PreserveAttribute” and the linker will see it (see the sketch below). Think of it as garlic for linker vampires…
The third reason not to use Link all is that it might break any referenced third-party libraries that aren’t “linker friendly”.
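
A minimal sketch of such a home-rolled attribute, assuming (as described above) that the linker only cares about the attribute’s name and its AllMembers/Conditional flags:

using System;

// A hand-rolled PreserveAttribute for code (such as a PCL) that cannot reference
// the Xamarin assemblies. The linker matches the attribute by name, so the
// namespace it lives in does not matter.
[AttributeUsage(AttributeTargets.All)]
public sealed class PreserveAttribute : Attribute
{
    // Set to true to keep every member of the decorated type.
    public bool AllMembers;

    // Set to true to keep the member only if its containing type is kept.
    public bool Conditional;
}

// Example: keep a class that is only ever created via reflection or an IoC container.
[Preserve(AllMembers = true)]
public class CreatedOnlyByReflection
{
}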

So what’s the summary of all this?

Simply leave the linker as is and carry on with your life. Nothing to see here, circulate!


Gone Mobile: Episode 25: Performance Comparisons: Part Two with Harry Cheung

Performance is a huge and important topic, so one episode just wasn’t enough. In this episode we talk to Harry Cheung about the performance tests he’s been running to see just how all these different mobile app development approaches perform when it comes to raw computation.

Hosts: Greg Shackles, Jon Dick

Guest: Harry Cheung


Thanks to our Sponsors!

Raygun.io

Raygun.io – Exceptional Error Tracking
Raygun.io is the fastest and easiest way to track your application’s errors and get the level of detail you need to fix crashes quickly. Notifications are delivered right to your inbox and presented on a beautiful dashboard.

Johan Karlsson: Connecting to Android Player using VS and Parallels

This is a short guide on how to connect to Android Player if you’re using Visual Studio in Windows through Parallels. I used to do this in a far more complicated manner before I realized that it’s just this simple. Looking ahead, in VS 2015 Microsoft gives us an x86/Hyper-V Android emulator that looks great, but for now this works best.

Start your engines

Fire up Android Player (or any other emulator of your choice that runs Android) in OS X and get the emulator’s address. Click the settings cog and note the IP address.

Connect to the emulator

In Windows, open your project in Visual Studio and go to Tools -> Android -> Android Adb Command Prompt. Type adb connect [the IP address] and you should then be connected to your emulator, as in the image below.

Run your project

You should now see your device in Visual Studio!

Troubleshooting

Of course, things can go wrong. The issues I’ve encountered are these.
1) I had to disable my wireless network while connected to the local wired network. Surely this is a configuration issue that I just haven’t bothered with yet.
2) The emulator doesn’t show up. Restart Visual Studio.
3) You have to reconnect each time your Mac goes to sleep…
4) Firewalls… Make sure port 5555 is open for TCP from Windows to OS X.

Greg Shackles: Building Context-Aware Apps with Beacons

Recently I’ve been giving some talks on building context-aware apps with beacons, so I just wanted to quickly publish my content around that in one place. If it’s not immediately obvious, I think beacons and context-based technologies are seriously awesome.

.NET Rocks!

First, Carl and Richard were nice enough to invite me back on .NET Rocks! to talk about this stuff as well. You can find that episode over on their site, or in any of the usual places you subscribe to podcasts.


Here are the slides from my talk at my NYC Mobile .NET Developers Group:

The sample app used as part of that talk, a super basic scavenger hunt type app for iOS and Android, can be found on my GitHub page.
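
If you want a quick taste of what the beacon side of an app like that looks like in C#, here is a minimal Xamarin.iOS ranging sketch; the UUID and region identifier are placeholders, and the actual sample app may well be structured differently:

using CoreLocation;
using Foundation;

public class BeaconRanger
{
    readonly CLLocationManager _locationManager = new CLLocationManager();

    // Placeholder values; a real app would use the UUID its own beacons advertise.
    static readonly NSUuid BeaconUuid = new NSUuid("E2C56DB5-DFFB-48D2-B060-D0F5A71096E0");
    const string RegionId = "com.example.scavengerhunt";

    public void Start()
    {
        // iOS 8+ asks for location permission before ranging; this also requires
        // NSLocationWhenInUseUsageDescription to be set in Info.plist.
        _locationManager.RequestWhenInUseAuthorization();

        _locationManager.DidRangeBeacons += (sender, e) =>
        {
            foreach (var beacon in e.Beacons)
            {
                // Proximity (Immediate/Near/Far) is what drives the context-aware behavior.
                System.Diagnostics.Debug.WriteLine(
                    "Beacon {0}/{1}: {2}", beacon.Major, beacon.Minor, beacon.Proximity);
            }
        };

        _locationManager.StartRangingBeacons(new CLBeaconRegion(BeaconUuid, RegionId));
    }
}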

Hopefully some of this helps inspire you to try out this stuff if you haven’t already, and start building awesome apps!

Adam Kemp: Decoupling Views In Multi-Screen Sequences

In my previous post I explained how to decouple individual views and why that is a good idea. In this post I will take that idea further and explain how to use this concept in more advanced UX scenarios involving multi-screen sequences.

Motivation

As a summary, the benefits of decoupling views are increased flexibility and allowing for more code reuse. For instance, a particular type of view may be used in multiple parts of your application in slightly different scenarios. If that view makes assumptions about where it fits within the whole app then it would be difficult to reuse that view in a different part of the app.

Still, at some level in your application you need to build in some kind of knowledge of which view is next. In the last post I gave a basic example where that knowledge lived in the Application class. There are many situations in which the Application class may be the best place for this kind of app-wide navigation logic, but some situations are more advanced and require a more sophisticated technique.

For example, it is also common to have a series of views within an app that always go together, but that sequence as a whole may be launched from different parts of the application. On iOS this kind of reusable sequence of views can be represented in a Storyboard[1], but we can achieve the same result in code.

An Example

As an example let’s consider a sequence of views for posting a picture to a social network:

  1. Choose a picture from a library or choose to take a new picture.
  2. If the user chose to take a new picture then show the camera view.
  3. After the user has either chosen a picture or taken a new picture he can add a comment.
  4. The picture is posted.

At any point during this process the user should also have the option to cancel, which should return the user back to where he started.

Here are some questions to consider when implementing this UX flow:

  • How can we handle the cancel button in a way that avoids code duplication?
  • How can we avoid code duplication for the various parts of the app that might want to invoke this sequence? For instance, perhaps you can post a picture either to your own profile or on someone else’s profile or in a comment or in a private message.
  • How can we allow for flexibility such that different parts of the app can do different things with the chosen picture/comment?

The first two questions are about code reuse, which is one of our goals. We want to avoid both having these individual screens duplicate code to accomplish the same thing, and we also want to avoid duplication of code from elsewhere in our app. The last question is about how we can decouple this code itself from the act of using the results of the sequence (i.e., the picture and the comment). This is important because each part of the app that might use this probably has to do slightly different things with the results.

Creating the Views

The example flow has three unique screens:

  1. A screen that lets the user choose an image or choose to take a new picture.
  2. A screen for taking a picture.
  3. A screen for entering a comment.

As per my last post, each of these views should be written to be agnostic about how it’s used. There may be yet another part of the application that allows for editing a comment on an existing post, and you probably want to reuse the same view (#3) for that use case. Therefore you shouldn’t make any assumptions when implementing that view about how it will be used.

To accomplish this each view could be written with events for getting the results. Their APIs might look like this:

public class ImageEventArgs : EventArgs
{
    public Image Image { get; private set; }

    public ImageEventArgs(Image image)
    {
        Image = image;
    }
}

public class CommentEventArgs : EventArgs
{
    public string Comment { get; private set; }

    public CommentEventArgs(string comment)
    {
        Comment = comment;
    }
}

public class ImagePickerPage : ContentPage
{
    public event EventHandler TakeNewImage;

    public event EventHandler<ImageEventArgs> ImageChosen;

    // ...
}

public class CameraPage : ContentPage
{
    public event EventHandler<ImageEventArgs> PictureTaken;

    // ...
}

public class ImageCommentPage : ContentPage
{
    public event EventHandler<CommentEventArgs> CommentEntered;

    public ImageCommentPage(Image image)
    {
        // ...
    }

    // ...
}

Constructing the Sequence

Now that we have our building blocks we need to put it all together. To do that we will create a new class that represents the whole sequence. This new class doesn’t need to be a view itself. Instead, it is just an object that manages the sequence. It will be responsible for creating each page as needed, putting them on the screen, and combining the results. Its public API might look like this:

public class CommentedImageSequenceResults
{
    public static CommentedImageSequenceResults CanceledResult = new CommentedImageSequenceResults();

    public bool Canceled { get; private set; }

    public Image Image { get; private set; }

    public string Comment { get; private set; }

    public CommentedImageSequenceResults(Image image, string comment)
    {
        Image = image;
        Comment = comment;
    }

    private CommentedImageSequenceResults()
    {
        Canceled = true;
    }
}

public class CommentedImageSequence
{
    public static Task<CommentedImageSequenceResults> ShowAsync(INavigation navigation)
    {
        // ...
    }

    // ...
}

Notice that in this case I’ve chosen to simplify the API by using a Task<T> instead of multiple events. This plays nicely with C#’s async/await feature. I could have done the same with each of the individual views as well, but I wanted to show both approaches. Here is an example of how this API could be used:

public class ProfilePage : ContentPage
{
    // ...

    private async void HandleAddImageButtonPressed(object sender, EventArgs e)
    {
        var results = await CommentedImageSequence.ShowAsync(Navigation);
        if (!results.Canceled)
        {
            PostImage(results.Image, results.Comment);
        }
    }
}

Of course you could have similar code elsewhere in the app, but what you do with the results would be different. That satisfies our requirements of flexibility and avoiding code duplication.
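
As mentioned above, the same Task-based approach could also be applied to the individual views. Here is a rough sketch of what that variation might look like for the comment page; the layout details are just placeholders rather than code from the pages above:

using System.Threading.Tasks;
using Xamarin.Forms;

public class TaskBasedImageCommentPage : ContentPage
{
    private readonly TaskCompletionSource<string> _commentSource = new TaskCompletionSource<string>();

    public TaskBasedImageCommentPage(Image image)
    {
        var entry = new Entry { Placeholder = "Add a comment" };
        var doneButton = new Button { Text = "Done" };

        // Complete the task when the user finishes, instead of raising an event.
        doneButton.Clicked += (sender, e) => _commentSource.TrySetResult(entry.Text);

        Content = new StackLayout { Children = { image, entry, doneButton } };
    }

    // Callers await this instead of subscribing to a CommentEntered event.
    public Task<string> GetCommentAsync()
    {
        return _commentSource.Task;
    }
}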

Now let’s look at how you would actually implement the sequence:

public class CommentedImageSequence
{
    private readonly TaskCompletionSource<CommentedImageSequenceResults> _taskCompletionSource = new TaskCompletionSource<CommentedImageSequenceResults>();

    private readonly NavigationPage _navigationPage;
    private readonly ToolbarItem _cancelButton;

    private Image _image;

    private CommentedImageSequence()
    {
        _cancelButton = new ToolbarItem("Cancel", icon: null, activated: HandleCancel);
        _navigationPage = new NavigationPage(CreateImagePickerPage());
    }

    private void AddCancelButton(Page page)
    {
        page.ToolbarItems.Add(_cancelButton);
    }

    private ImagePickerPage CreateImagePickerPage()
    {
        var page = new ImagePickerPage();
        AddCancelButton(page);
        page.TakeNewImage += HandleTakeNewImage;
        page.ImageChosen += HandleImageChosen;
        return page;
    }

    private CameraPage CreateCameraPage()
    {
        var page = new CameraPage();
        AddCancelButton(page);
        page.PictureTaken += HandleImageChosen;
        return page;
    }

    private ImageCommentPage CreateImageCommentPage()
    {
        var page = new ImageCommentPage(_image);
        AddCancelButton(page);
        page.CommentEntered += HandleCommentEntered;
        return page;
    }

    private async void HandleTakeNewImage(object sender, EventArgs e)
    {
        await _navigationPage.PushAsync(CreateCameraPage());
    }

    private async void HandleImageChosen(object sender, ImageEventArgs e)
    {
        _image = e.Image;
        await _navigationPage.PushAsync(CreateImageCommentPage());
    }

    private void HandleCommentEntered(object sender, CommentEventArgs e)
    {
        _taskCompletionSource.SetResult(new CommentedImageSequenceResults(_image, e.Comment));
    }

    private void HandleCancel()
    {
        _taskCompletionSource.SetResult(CommentedImageSequenceResults.CanceledResult);
    }

    public static async Task<CommentedImageSequenceResults> ShowAsync(INavigation navigation)
    {
        var sequence = new CommentedImageSequence();

        await navigation.PushModalAsync(sequence._navigationPage);

        var results = await sequence._taskCompletionSource.Task;

        await navigation.PopModalAsync();

        return results;
    }
}

Let’s summarize what this class does:

  1. It creates the NavigationPage used for displaying the series of pages and allowing the user to go back, and it presents that page (modally).
  2. It creates the cancel button that allows the user to cancel. Notice how only one cancel button needed to be created, and it is handled in only one place. Code reuse!
  3. It creates each page in the sequence as needed and pushes it onto the NavigationPage’s stack.
  4. It keeps track of all of the information gathered so far. That is, once a user has taken or captured an image it holds onto that image while waiting for the user to enter a comment. Once the comment is entered it can return both the image and the comment together.
  5. It dismisses everything when done.

Now we can easily show this whole sequence of views from anywhere in our app with just a single line of code. If we later decide to tweak the order of the views (maybe we decide to ask for the comment first for some reason) then we don’t have to change any of those places in the app that invoke this sequence. We just have to change this one class. Likewise, if we decide that we don’t want a modal view and instead we want to reuse an existing NavigationPage then we just touch this one class. That’s because all of the navigation calls for this whole sequence (presenting the modal navigation page, pushing views, and popping the modal) are in a single, cohesive class.

Summary

This technique can be used for any self-contained sequence of views within an application, including the app as a whole if you wanted. You can also compose these sequences if needed (that is, one sequence could reuse another sequence as part of its implementation). This is a powerful pattern for keeping code decoupled and cohesive. Anytime you find yourself wanting to put a call to PushAsync or PushModalAsync (or the equivalent on other platforms) within a view itself you should stop and think about how you could restructure that code to keep all of the navigation in one place.
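
For example, here is a rough sketch of what composing sequences could look like. ConfirmPostPage and its ConfirmedTask property are hypothetical stand-ins rather than classes from the code above:

using System.Threading.Tasks;
using Xamarin.Forms;

public class CreatePostSequence
{
    public static async Task<bool> ShowAsync(INavigation navigation)
    {
        // Step 1: reuse the existing sequence to gather the image and comment.
        var results = await CommentedImageSequence.ShowAsync(navigation);
        if (results.Canceled)
        {
            return false;
        }

        // Step 2: a further page owned by this outer sequence (hypothetical).
        var confirmPage = new ConfirmPostPage(results.Image, results.Comment);
        await navigation.PushModalAsync(confirmPage);
        var confirmed = await confirmPage.ConfirmedTask;
        await navigation.PopModalAsync();

        return confirmed;
    }
}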


  1. I do not actually recommend using iOS storyboards for multiple reasons, which I may eventually get around to documenting in a blog post.