assignment4

Assignment Four Overview

Due: Wednesday, November 21st before class time
Worth 16.5% of your course grade.

All assignments in this class will be emphasizing the theme of tangible interactive computing. We've now acquired significant experience working with the Arduino hardware prototyping platform. In assignments 2 and 3, we designed and implemented new interactive experiences using custom hardware that we built. With assignment 2, the focus was on creating new interactions with desktop computers. With assignment 3, we left the desktop environment altogether and explored embedding computation around us (in our clothes, in the walls, etc.) and/or creating new types of off-the-desktop interactions (untethered from the laptop/desktop).

In this assignment, our aims stay the same: combine the virtual and physical worlds in unique ways, but our approach changes. Enter: the Microsoft Kinect. You will use the Microsoft Kinect, which combines an IR camera (for depth) and a traditional RGB camera (for visuals) into a single sensor.

What To Do

In this assignment, your goal is to use the Microsoft Kinect to create a physical interaction that is digitized, analyzed, and then used to control something in the real world. You will use the Kinect (and computer vision) to translate the physical into the virtual and the Arduino (or the Raspberry Pi or BeagleBone) to translate the virtual back to the physical.
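To make that pipeline concrete, here is a minimal sketch in C# (assuming the Kinect for Windows SDK v1.x): the skeletal stream tracks your right hand, and its horizontal position is streamed over a serial port to an Arduino, which could use it to drive a servo. The COM port, the baud rate, and the servo sketch on the Arduino side are all assumptions you would replace with your own setup; treat this as a starting point, not a finished interaction.

    // Minimal Kinect -> Arduino sketch (Kinect for Windows SDK v1.x assumed).
    // Tracks the right hand and streams a servo angle over serial.
    using System;
    using System.IO.Ports;
    using System.Linq;
    using Microsoft.Kinect;

    class HandToServo
    {
        // Assumed port name and baud rate -- change these to match your Arduino.
        static SerialPort arduino = new SerialPort("COM3", 9600);

        static void Main()
        {
            KinectSensor sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) { Console.WriteLine("No Kinect found."); return; }

            arduino.Open();
            sensor.SkeletonStream.Enable();
            sensor.SkeletonFrameReady += OnSkeletonFrame;
            sensor.Start();

            Console.WriteLine("Tracking... press Enter to quit.");
            Console.ReadLine();
            sensor.Stop();
            arduino.Close();
        }

        static void OnSkeletonFrame(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                Skeleton user = skeletons.FirstOrDefault(
                    s => s.TrackingState == SkeletonTrackingState.Tracked);
                if (user == null) return;

                // Map the right hand's x-position (roughly -1..+1 m) to a servo angle (0..180).
                float x = user.Joints[JointType.HandRight].Position.X;
                int angle = (int)Math.Max(0f, Math.Min(180f, (x + 1f) * 90f));

                // The Arduino sketch (not shown) would parse this line and move the servo.
                arduino.WriteLine(angle.ToString());
            }
        }
    }

The same skeleton-to-serial pattern generalizes to relays, motors, lights, or anything else your microcontroller can switch.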

Here are some examples

Note: the Microsoft Kinect also has a microphone and a speech recognition SDK. You are welcome to experiment with this as well but it is not required for this assignment. We also have a Microsoft Surface 2 (an interactive tabletop computer) that you are welcome to use for this assignment.

Setting up the Kinect Dev Environment

We will be using the Microsoft "Kinect for Windows" sensor (rather than the XBox360 Kinect sensor) for this assignment. What's the difference you ask? Good question. According to the Microsoft Kinect Developer FAQ, "The Kinect for Windows sensor is a fully-tested and supported Kinect experience on Windows with features such as “near mode,” skeletal tracking control, API improvements, and improved USB support across a range of Windows computers and Windows-specific 10’ acoustic models."

So, it would seem that the Windows-based sensor has a special "near mode" to deal with the fact that people will be much closer to the Kinect sensor on a PC/laptop than they would be with an XBox360 Kinect sensor (i.e., using it in living rooms). Again, according to the Kinect Developer FAQ, "'near mode' enables the depth sensor to see objects as close as 40 centimeters and also communicates more information about depth values outside the range than was previously available. There is also improved synchronization between color and depth, mapping depth to color, and a full frame API."
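For reference, near mode is essentially a one-line switch in code. The snippet below is a sketch assuming the Kinect for Windows SDK v1.x; it only works with the Kinect for Windows hardware (the SDK rejects the near-range setting on an XBox360 sensor).

    // Enabling near mode (Kinect for Windows SDK v1.x assumed).
    using System.Linq;
    using Microsoft.Kinect;

    class NearModeSetup
    {
        static void Main()
        {
            KinectSensor sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) return;

            sensor.DepthStream.Enable();
            sensor.DepthStream.Range = DepthRange.Near;              // depth readings down to ~40 cm

            sensor.SkeletonStream.Enable();
            sensor.SkeletonStream.EnableTrackingInNearRange = true;  // allow skeletons close to the sensor
            sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated; // optional: upper-body-only tracking

            sensor.Start();
        }
    }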

On Windows

The "officially supported" way of setting up your development environment for the Microsoft Kinect on Windows involves the following steps:
  1. Download and install Microsoft Visual Studio. You can get the full version from your Microsoft Dreamspark account or download Visual Studio Express (which is freely available).
  2. Download and install the Kinect for Windows SDK
  3. Set up the Kinect for Windows Developer Toolkit
  4. Plug the Kinect into your laptop/PC (a quick connectivity check is sketched just after this list)
  5. You can also install language packs if you so desire (French, German, Italian, Japanese, and Spanish are supported)
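Once the sensor is plugged in (step 4), a quick way to confirm that the SDK can see it is a small console check like the one sketched below (again assuming the Kinect for Windows SDK v1.x).

    // Connectivity check (Kinect for Windows SDK v1.x assumed).
    using System;
    using Microsoft.Kinect;

    class KinectCheck
    {
        static void Main()
        {
            if (KinectSensor.KinectSensors.Count == 0)
            {
                Console.WriteLine("No Kinect sensors detected -- check the USB and power connections.");
                return;
            }
            foreach (KinectSensor sensor in KinectSensor.KinectSensors)
            {
                // Status should read Connected; values like NotPowered usually mean a cabling problem.
                Console.WriteLine("{0}: {1}", sensor.DeviceConnectionId, sensor.Status);
            }
        }
    }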

On Mac OS X

It used to be that the only way to get a Kinect sensor set up for Mac OS X was by using open source software. Now, however, there are two approaches. The new Kinect SDK works on Windows running in a virtual machine. To get this to work, see: Using Kinect for Windows with a Virtual Machine. Note: you can download a copy of Windows using your Microsoft Dreamspark account. If you want to go the open source route, see this link or this link. I've not had a chance to explore this option, so you're on your own.

Kinect Resources

Helpful Links

Books

  • Jared St. Jean (editor of developkinect.com), Kinect Hacks: Tips & Tools for Motion and Pattern Detection, O'Reilly, 2012, Amazon
  • Greg Borenstein, Making Things See: 3D vision with Kinect, Processing, Arduino, and MakerBot, Make:Books, 2012, Amazon, Safari Online

Tools/Library Usage

As before, you can use whatever developer tools, IDEs, debuggers, libraries, and/or code snippets you find to support turning your ideas into a reality. Of course, you must keep track of and cite any code or libraries you use in your project. You must also include citations for projects that inspired your own. Do not be shy about including as many links as you can that influenced your project's form or function in some way.

Remember to also include citations (with URLs) in your code via comments to all code that you borrowed from or extended from blogs, forums, open source, etc. If I find code that was copied and not appropriately cited, I will consider this a direct violation of the UMD Academic Integrity policy. You will not be penalized for re-using or re-appropriating cool things in this class; you will be penalized for not properly attributing them.

Assignment Deliverables

The assignment deliverables are due before lecture begins. We will be following assignments 2 and 3's deliverable paradigm:
  • Utilize github to store and post your code. This should be publicly viewable and accessible. You are welcome to use any license you like on the code itself (including no license at all--e.g., None). When you use other people's code, you must cite your source--even if it's just a blog post and a small snippet. I believe github provides academic accounts (for additional features, please check the website).
  • Post a Wiki write-up to your own wiki subpage on this wiki (example).
  • Upload a video demoing your submission to YouTube. You should include the link to the YouTube video in your Wiki page. Please take the video creation process seriously--video is one of the best forms to portray the interactivity and sheer awesomeness of your inventions. I hope that you create something you would feel proud to show your friends or family.
  • Presentation/demo. On Nov 21st, we'll have a presentation/demo day. We will dedicate the whole 75 minutes to this. We have 7 teams so each presentation should be ~5 minutes and we'll use the remaining time in the class for demos.

Assignment Grading and Rubric

Most, if not all, assignments in this class will be graded on novelty, aesthetics, fun, creativity, technical sophistication, and engagement. All assignments (including the project) will be peer-reviewed by everyone in the class, including me. Everyone, including me, will fill out the same feedback form. As before, we will rank our favorite assignments and the top two or three teams will receive a prize.

Completed Assignments

This page is editable by all members of the class (once you login to wikispaces). If you cannot edit this page, please send me an email or post to Piazza for help. Use the section below to link to your assignment write-ups.

1. Kinect - And then there was light!

Lee Stearns, PhD student, Department of Computer Science
Preeti Bhargava, PhD student, Department of Computer Science

Most of the labs on our campus have motion sensors installed to turn the lights off if no one is in the room. However, these sensors can be annoying: they turn the lights off after a fixed period of detecting no motion, even if someone is still in the room. We propose a solution to this problem.

2. The Force

Leyla Norooz, Masters student, Human-Computer Interaction
Darren Smith, Masters student, Department of Computer Science

Ever wonder how it would feel if you could control "The Force"? Well, we've made that possible! ...Sort of. We've given you the ability to control a toy helicopter just by using the movements in your arms and body through the Kinect for Windows. Check it out!

3. The Friendly Waving and Dancing Bunny

Zahra Ashktorab, Masters student, Human-Computer Interaction
Allan Fong, PhD student, Department of Computer Science

It is the joy of every child, and quite possibly every adult, to experience an animal’s reaction to one’s wave or gesture when they visit the zoo. Children tirelessly wave at pandas, elephants, and other animals in the exhibit in the hope of a glimmer of a reaction. For this assignment, we were inspired and motivated by this type of motion- and gesture-based interaction with animals. We built a bunny system to mimic and respond to people’s movements and gestures.

4. Light My Way

Adil Yalcin, PhD student, Computer Science
Nick Gramsky, PhD student, Computer Science

We present a remote flashlight that lights your way in a dark room. Don't wake up sleeping friends in the room: grab your flashlight virtually and just use your arm to control it freely. With this, you don't need any gadgets in your pocket!

5. Drone Control

Tansy Peplau, Masters student, Human-Computer Interaction
Rajan Zachariah, Masters student, Human-Computer Interaction

Do you have access to a Kinect? What about a Parrot AR Drone 2.0? No smartphone? You can play anyway! Presenting: the new AR Drone Control using only your body! The ability to feel like you're driving the drone; inside the drone. You are the drone.

6. LazyHH

Harish Vaidyanathan, Masters Student, HCI
Cheuk Yiu Ip, Ph.D Student, Computer Science

Have you ever had to look for your phone when it rings? You need LazyHH! LazyHH will carry your phone to you when you swipe your hand. We present a gesture-activated gadget/beverage/candy transportation device to increase our everyday laziness.

7. LEGO NXT Meets Kinect

Cheng Fu, PhD student, Department of Geography
Kotaro Hara, Ph.D Student, Computer Science

We used the Kinect and its C# API to develop a program that controls the movement of a LEGO Mindstorms NXT robot using upper-body gestures.