Building Touch Interfaces for Windows Phones, Part 2

In Part 1 of this series, I described how to build simple touch interfaces for phone apps by processing mouse events. Recall that primary touch events – events involving the first finger to touch the screen – are automatically promoted to mouse events by the runtime, and that you can build a UI that responds to single touch by writing mouse-event handlers. In this, the next installment in the series, we’ll dig deeper into Silverlight for Windows Phone and learn how to process touch events directly.
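
As a quick refresher, the single-touch approach from Part 1 boils down to handling mouse events on the element you care about. Here’s a minimal sketch of that idea, assuming a Rectangle named Rect in MainPage.xaml (the exact listing in Part 1 may differ):

// MainPage.xaml.cs
public MainPage()
{
    InitializeComponent();

    // The runtime promotes the primary touch point to mouse events,
    // so a mouse handler is all that single-touch input requires
    Rect.MouseLeftButtonDown += new MouseButtonEventHandler(OnRectMouseDown);
}

void OnRectMouseDown(object sender, MouseButtonEventArgs e)
{
    Rect.Fill = new SolidColorBrush(Colors.Blue);
}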

Both Silverlight for the desktop and Silverlight for Windows Phone support a static event named FrameReported, which belongs to the System.Windows.Input.Touch class. To process touch events directly, you register a handler for that event, typically in MainPage’s constructor. Here’s a simple example:

// MainPage.xaml.cs
public MainPage()
{
    InitializeComponent();
    Touch.FrameReported += new TouchFrameEventHandler(OnFrameReported);
}

void OnFrameReported(object sender, TouchFrameEventArgs e)
{
    // TODO: Process the touch event
}

Now, anytime a finger touches the screen, moves across the screen, or lifts off the screen, your OnFrameReported method will be called. Note that Touch.FrameReported events have application scope. Rectangles and other XAML objects fire mouse events, so the object targeted by the event is implicit in the event itself. By contrast, XAML objects don’t fire touch events. When you use Touch.FrameReported, you’ll almost invariably end up writing some hit-testing logic to determine what it was that the user touched. Fortunately, writing that logic generally isn’t difficult because Silverlight for Windows Phone provides an easy way to determine what’s under a touch point.
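
If you need general-purpose hit-testing rather than a comparison against a specific named element, Silverlight’s VisualTreeHelper can report every element under a point. The sketch below is an illustration of that approach only; the samples later in this article rely on the simpler TouchDevice.DirectlyOver property, and the TouchFrameEventArgs members used here are described in detail in the sections that follow.

// A hit-testing sketch (illustrative only)
void OnFrameReported(object sender, TouchFrameEventArgs e)
{
    foreach (TouchPoint point in e.GetTouchPoints(null))
    {
        // Passing null above yields positions relative to the top-left corner
        // of the content area, which is what FindElementsInHostCoordinates expects
        foreach (UIElement element in
            VisualTreeHelper.FindElementsInHostCoordinates(point.Position, this))
        {
            if (element is Rectangle)
            {
                // The touch point is over a Rectangle; respond here
                break;
            }
        }
    }
}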

Before we go further, let’s discuss the pros and cons of using Touch.FrameReported to build touch interfaces. First, the pros:

  • Unlike mouse events, Touch.FrameReported events can be used to build multi-touch interfaces
  • Because Touch.FrameReported is supported in desktop versions of Silverlight, too, you can use the same code to respond to touch events on phones and in Silverlight apps running on PCs equipped with touch screens
  • Using Touch.FrameReported events is generally more performant than using mouse events

And then the cons:

  • Touch.FrameReported is a low-level touch API that lacks support for inertia and gestures
  • You can’t test multi-touch in the Windows Phone emulator unless the PC you’re running it on has a multi-touch screen; therefore, you’ll probably need a physical phone to test on (all Windows phones feature a capacitive multi-touch screen that supports at least four simultaneous touch points, or fingers)
  • Multi-touch code can be tricky to write

There isn’t a lot of detailed information in the documentation or the blogosphere about Touch.FrameReported events, and some of the information that is out there is inaccurate. The good news is that touch events aren’t at all difficult to comprehend once you wrap your mind around a few important concepts.

For starters, a Touch.FrameReported event handler receives a reference to a TouchFrameEventArgs object, which exposes three important methods:

  • GetPrimaryTouchPoint, which returns a TouchPoint reference to the primary touch point
  • GetTouchPoints, which returns a TouchPointCollection with one or more TouchPoint objects representing touch points
  • SuspendMousePromotionUntilTouchUp, which suspends the promotion of primary touch events to mouse events until all fingers have lifted off the screen and a new sequence of touch events begins (a usage sketch follows this list)
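
Here’s a minimal sketch of how SuspendMousePromotionUntilTouchUp is typically called. Note that the method may only be called while the primary touch point’s action is TouchAction.Down, which is why the sketch checks the action first:

void OnFrameReported(object sender, TouchFrameEventArgs e)
{
    TouchPoint primary = e.GetPrimaryTouchPoint(null);

    if (primary != null && primary.Action == TouchAction.Down)
    {
        // No mouse events will be generated until every finger lifts
        // and a new sequence of touch events begins
        e.SuspendMousePromotionUntilTouchUp();
    }
}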

TouchFrameEventArgs also contains a public property named Timestamp, which records when the touch event occurred, measured in milliseconds. This property might be useful if, for example, you were implementing support for double taps and needed to know when the current touch event occurred relative to the previous one.
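
Here’s a minimal sketch of that idea, assuming a _lastTapTime field and a 300-millisecond threshold; both are arbitrary choices for illustration, not platform constants:

private int _lastTapTime;

void OnFrameReported(object sender, TouchFrameEventArgs e)
{
    TouchPoint point = e.GetPrimaryTouchPoint(null);

    if (point != null && point.Action == TouchAction.Down)
    {
        // Two downs within 300 ms of each other count as a double tap
        if (e.Timestamp - _lastTapTime < 300)
        {
            // Handle the double tap here
        }

        _lastTapTime = e.Timestamp;
    }
}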

TouchFrameEventArgs’ primary purpose is to give you access to TouchPoint objects representing touch points on the screen – points with which a finger is currently in contact. The TouchPoint class exposes four important properties of its own (the sketch following this list shows all four in action):

  • Action, which indicates whether a finger just touched the screen (TouchAction.Down), moved across the screen (TouchAction.Move), or lifted off the screen (TouchAction.Up)
  • Position, which locates the touch point using X and Y pixel coordinates relative to the upper-left corner of the object passed to GetPrimaryTouchPoint or GetTouchPoints
  • Size, which presumably (but not, at present, very reliably) provides the width and height of the area currently in contact with a finger
  • TouchDevice, which represents the “device” (finger) currently in contact with the touch point and exposes two useful properties of its own: Id, which contains a unique integer ID identifying the device and may be used to correlate touch events emanating from a sequence of actions; and DirectlyOver, which identifies the topmost UI element under the touch point
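
To make those properties concrete, here’s a small diagnostic sketch that simply logs every touch point as events arrive; it has no purpose beyond illustrating where each value lives:

void OnFrameReported(object sender, TouchFrameEventArgs e)
{
    foreach (TouchPoint point in e.GetTouchPoints(null))
    {
        // Dump the Action, Position, Size, and TouchDevice info for each point
        System.Diagnostics.Debug.WriteLine(string.Format(
            "Finger {0}: {1} at ({2},{3}), size {4}x{5}, over {6}",
            point.TouchDevice.Id, point.Action,
            point.Position.X, point.Position.Y,
            point.Size.Width, point.Size.Height,
            point.TouchDevice.DirectlyOver));
    }
}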

That’s admittedly a lot to take in all at once, so let’s write some code to help clarify matters. In the previous installment in this series, I presented a simple code sample that turns a red rectangle to blue when touched. Here’s the equivalent example written with Touch.FrameReported events:

// MainPage.xaml
<Rectangle x:Name="Rect" Width="300" Height="200" Fill="Red" />

// MainPage.xaml.cs
public MainPage()
{
    InitializeComponent();
    Touch.FrameReported += new TouchFrameEventHandler(OnFrameReported);
}

void OnFrameReported(object sender, TouchFrameEventArgs e)
{
    TouchPoint point = e.GetPrimaryTouchPoint(null);

    if (point.Action == TouchAction.Down && point.TouchDevice.DirectlyOver == Rect)
    {
        Rect.Fill = new SolidColorBrush(Colors.Blue);
    }
}

The moment a finger touches the screen, the Touch.FrameReported event fires. We call GetPrimaryTouchPoint to grab a TouchPoint object representing the location that was touched, and then, seeing that the action is TouchAction.Down, we use the DirectlyOver property to see whether the touch point is over the rectangle. If it is, we change the fill color.

If you run this code, you’ll find that touching the rectangle indeed changes its color. However, if you first touch the screen outside the rectangle, and then use another finger to touch the rectangle with the first finger still down, the color doesn’t change. Why?

To understand why, we must look deeper into the internals of Touch.FrameReported. Suppose you touch the screen with one finger, move it around a little, and then lift the finger. What ensues is a series of Touch.FrameReported events, each carrying with it just one TouchPoint object. You can get a reference to that TouchPoint by calling GetPrimaryTouchPoint or GetTouchPoints; it doesn’t matter, because when there’s just one finger touching the screen, all TouchPoints are primary touch points.

However, something altogether different happens if you place two fingers on the screen and move them around. Each Touch.FrameReported event will contain two TouchPoint objects: one for each finger that’s currently in contact with the screen. If you call GetPrimaryTouchPoint, you’ll get a TouchPoint object representing the first finger that made contact with the screen. But if you call GetTouchPoints, you’ll receive two TouchPoint objects. By calling GetPrimaryTouchPoint, you’re basically ignoring any finger but the first. That’s why the rectangle doesn’t change color in the previous example if you touch it with the second finger.

As an aside, many blogs and magazine articles advise you to check the reference returned by GetPrimaryTouchPoint for null. While that may be true for desktop touch screens (I don’t have one, so I can’t say for sure), my experience is that it isn’t necessary on phones. In Silverlight for Windows Phone, GetPrimaryTouchPoint always returns a TouchPoint reference. If the user does something unusual, such as touching one finger to the screen, then touching with a second, then lifting the first finger, and finally lifting the second, the second touch point becomes the primary touch point the moment the first finger lifts.

Could we modify the previous code sample so that the rectangle changes color when it’s touched with any finger? You bet. Here’s the code to prove it:

// MainPage.xaml.cs
public MainPage()
{
    InitializeComponent();
    Touch.FrameReported += new TouchFrameEventHandler(OnFrameReported);
}

void OnFrameReported(object sender, TouchFrameEventArgs e)
{
    TouchPointCollection points = e.GetTouchPoints(null);

    foreach (TouchPoint point in points)
    {
        if (point.Action == TouchAction.Down &&
            point.TouchDevice.DirectlyOver == Rect)
        {
            Rect.Fill = new SolidColorBrush(Colors.Blue);
        }
    }
}

Now run this on a phone and try the following:

  1. Touch one finger to the screen outside the rectangle
  2. With the first finger still down, touch the rectangle with a second finger

Voila! The rectangle changes color because we’re now using all touch points to hit-test the rectangle rather than just primary touch points.

This is a step in the right direction, because we just implemented a multi-touch UI. But there’s still more to think about when it comes to building rich user interfaces. What if, for example, you wanted to put two rectangles on the screen, and to allow the user to move both rectangles at once using two fingers? You’d need to do the following:

  1. When a finger goes down over a rectangle, associate the ID that identifies that finger with the rectangle
  2. Each time a Touch.FrameReported event fires, look at all the touch points accompanying the event and for each one, figure out which rectangle to move by examining the touch point’s ID (available from the TouchDevice.Id property)

One way to associate a finger ID with a rectangle is to store the ID with the rectangle itself, perhaps in the rectangle’s Name or Tag property. Another way to do it is to build a dictionary that correlates IDs to rectangles.
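
Here’s a minimal sketch of the Tag-based variant, assuming two rectangles named RedRect and BlueRect like the ones in the complete sample below (which uses the dictionary approach instead). The actual movement logic is omitted; the point here is simply how a finger ID is stored and looked up:

void OnFrameReported(object sender, TouchFrameEventArgs e)
{
    foreach (TouchPoint point in e.GetTouchPoints(null))
    {
        int id = point.TouchDevice.Id;

        if (point.Action == TouchAction.Down)
        {
            // Remember which finger "owns" the rectangle it touched
            Rectangle rect = point.TouchDevice.DirectlyOver as Rectangle;
            if (rect != null)
                rect.Tag = id;
        }
        else if (point.Action == TouchAction.Move)
        {
            // Find the rectangle (if any) owned by this finger
            Rectangle rect = null;
            if (RedRect.Tag is int && (int)RedRect.Tag == id)
                rect = RedRect;
            else if (BlueRect.Tag is int && (int)BlueRect.Tag == id)
                rect = BlueRect;

            if (rect != null)
            {
                // Move the rectangle here, as in the complete sample below
            }
        }
        else if (point.Action == TouchAction.Up)
        {
            // Break the association when the finger lifts
            if (RedRect.Tag is int && (int)RedRect.Tag == id)
                RedRect.Tag = null;
            else if (BlueRect.Tag is int && (int)BlueRect.Tag == id)
                BlueRect.Tag = null;
        }
    }
}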

Of course, for all you know, I could be making all this up. To prove that I’m not, here’s a final code sample that presents two rectangles that can be moved independently and concurrently:

// MainPage.xaml
<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
    <Rectangle x:Name="RedRect" Width="100" Height="100" Fill="Red">
        <Rectangle.RenderTransform>
            <TranslateTransform x:Name="RedTransform" Y="-100" />
        </Rectangle.RenderTransform>
    </Rectangle>
    <Rectangle x:Name="BlueRect" Width="100" Height="100" Fill="Blue">
        <Rectangle.RenderTransform>
            <TranslateTransform x:Name="BlueTransform" Y="100" />
        </Rectangle.RenderTransform>
    </Rectangle>
</Grid>

// MainPage.xaml.cs
public partial class MainPage : PhoneApplicationPage
{
    private Dictionary<int, RectInfo> _rects = new Dictionary<int, RectInfo>();

    // Constructor
    public MainPage()
    {
        InitializeComponent();

        // Register handler for Touch.FrameReported events
        Touch.FrameReported += new TouchFrameEventHandler(OnFrameReported);
    }

    private void OnFrameReported(object sender, TouchFrameEventArgs e)
    {
        TouchPointCollection points = e.GetTouchPoints(null);

        foreach (TouchPoint point in points)
        {
            if (point.Action == TouchAction.Down)
            {
                // Find out if a rectangle was touched
                Rectangle rect = null;

                if (point.TouchDevice.DirectlyOver == RedRect)
                    rect = RedRect;
                else if (point.TouchDevice.DirectlyOver == BlueRect)
                    rect = BlueRect;

                // If the answer is yes, associate the "device" (finger) ID with
                // the rectangle and store information regarding that rectangle.
                // Then change the rectangle's fill color to yellow.
                if (rect != null)
                {
                    TranslateTransform transform =
                        rect.RenderTransform as TranslateTransform;

                    RectInfo ri = new RectInfo()
                    {
                        Rect = rect,
                        Translation = new Point(transform.X, transform.Y),
                        StartPos = point.Position,
                        Fill = rect.Fill
                    };

                    _rects.Add(point.TouchDevice.Id, ri);
                    rect.Fill = new SolidColorBrush(Colors.Yellow);
                }
            }
            else if (point.Action == TouchAction.Move)
            {
                // Find the rectangle (if any) associated with the finger being moved
                int id = point.TouchDevice.Id;

                RectInfo ri = null;
                _rects.TryGetValue(id, out ri);

                if (ri != null)
                {
                    Rectangle rect = ri.Rect;
                    TranslateTransform transform =
                        rect.RenderTransform as TranslateTransform;

                    // Get the current position of the touch point
                    Point pos = point.Position;

                    // Compute the offset from the starting position
                    double dx = pos.X - ri.StartPos.X;
                    double dy = pos.Y - ri.StartPos.Y;

                    // Apply the deltas to the transform
                    transform.X = ri.Translation.X + dx;
                    transform.Y = ri.Translation.Y + dy;
                }
            }
            else if (point.Action == TouchAction.Up)
            {
                // Find the rectangle (if any) associated with the finger being lifted
                int id = point.TouchDevice.Id;

                RectInfo ri = null;
                _rects.TryGetValue(id, out ri);

                if (ri != null)
                {
                    // Restore the original fill color of the rectangle associated
                    // with the finger that was just lifted
                    Rectangle rect = ri.Rect;
                    rect.Fill = ri.Fill;

                    // Remove the finger ID from the dictionary
                    _rects.Remove(id);
                }
            }
        }
    }
}

public class RectInfo
{
    public Rectangle Rect;
    public Point Translation;
    public Point StartPos;
    public Brush Fill;
}

You now have a concrete example demonstrating how to use Touch.FrameReported events to implement multi-touch interfaces on Windows phones. There’s still more to do, however. We haven’t yet considered supporting gestures such as tap-selects and pinch-zooms. The subject of gestures is one that we’ll begin to address in the next article.
