JugEm: Software to help the world learn how to juggle
David McBrierty
School of Computing Science
Sir Alwyn Williams Building
University of Glasgow
G12 8QQ
Level 4 Project — January 31, 2013
Abstract
JugEm is an interactive system that uses the Microsoft Kinect Sensor to help users learn how to juggle. The system detects and tracks juggling balls, recognises throw, catch, peak and drop events, and scores a user's juggling using Shannon's theorem for the uniform juggle. This report describes the background research, requirements, design, implementation and evaluation of the system.
Education Use Consent
I hereby give my permission for this project to be shown to other University of Glasgow students and to be
distributed in an electronic format. Please note that you are under no obligation to sign this declaration, but
doing so would help future students.
Name:
Signature:
Contents

1 Introduction
   1.1 What this Project is
   1.2 Motivation
   1.3 Dissertation Outline

2 Background Research
   2.1 Literature Survey
       2.1.1 What is Juggling
       2.1.2 Science & Juggling: What goes around, comes around...
       2.1.3 Juggling Notation
   2.2 Technology Survey
       2.2.1 Mobile Devices
       2.2.2 Wii Controllers
       2.2.3 Kinect Application
   2.3 Existing Software Survey
       2.3.1 Learn to Juggle Clubs
       2.3.2 Wildcat Jugglers Tutorials
       2.3.3 Juggle Droid Lite
       2.3.4 Software Survey Conclusion

3 Requirements
   3.1 Functional Requirements
   3.2 Non Functional Requirements

4 Design
   4.1 Library Support
       4.1.1 Kinect Software Development Kit
       4.1.2 EmguCV Image Processing
       4.1.3 Microsoft Speech Platform 11
       4.1.4 XNA Framework
   4.2 Game Screen Management
       4.2.1 Screens
       4.2.2 Screen Management
   4.3 Events
       4.3.1 Detecting Events
       4.3.2 Event Hierarchy
       4.3.3 Event Delegation
       4.3.4 Event Pattern Matching

5 Implementation
   5.1 Kinect Data Input
       5.1.1 Depth Data
       5.1.2 Color Data
   5.2 Depth Image Processing
   5.3 Ball Processing
       5.3.1 Ball Detection
       5.3.2 Color Detection
   5.4 Event Detection
       5.4.1 Frames and Frame Processing
       5.4.2 Detecting Events Using Frames
       5.4.3 Missed Events
   5.5 Processing Events and Pattern Matching
       5.5.1 Validating The Start of a Pattern
       5.5.2 Matching the Pattern

6 Evaluation
   6.1 Requirements Evaluation
   6.2 Test Driven Development
       6.2.1 Inferring Event Times
   6.3 System Evaluation
       6.3.1 User Evaluation
       6.3.2 Mini Game Evaluation
       6.3.3 Horizontal Deviation
       6.3.4 Vertical Deviation
       6.3.5 Further Analysis

7 Conclusion
   7.1 Project Summary
   7.2 Future Work
   7.3 Personal Reflection

Appendices
A Evaluation Tasks
B Juggling Tutorial
C Evaluation Questionnaire
D Questionnaire Responses
E Mini Game Evaluation Data
Chapter 1
Introduction
1.1 What this Project is

1.2 Motivation

1.3 Dissertation Outline
What's coming up in this report.
Chapter 2
Background Research
This chapter will discuss the Background Research carried out prior to the system's design and implementation. The Background Research was completed in the form of three surveys, each of which is discussed in this chapter.
2.1 Literature Survey
The Literature Survey described in this section looks into the history of juggling and the scientific principles
behind it.
2.1.1 What is Juggling
Juggling is described as the act of continuously tossing into the air and catching a number of objects so as to keep at least one in the air while handling the others. Juggling has been a pastime for many centuries; the earliest known pictorial evidence of juggling (shown in Figure 2.1) was found in an ancient Egyptian burial site that was in use between 1994 and 1781 B.C. [9].
Figure 2.1: Ancient Juggling Hieroglyphic
Whilst the art of juggling might have been performed for different reasons many years ago, there is little doubt that the people depicted in the image are juggling. Juggling was also part of the first modern-day circus in the late 18th century [16]. There are many different varieties of juggling around today: jugglers can use many different patterns (such as the shower or the cascade), a variety of objects (such as clubs, rings or balls), and can work as part of a team (instead of on their own). There is even a form of juggling whereby the ball remains in contact with the body at all times (known as Contact Juggling [23]). Over the years, juggling has been used in proverbs [4], early 20th century psychological research [26] and even in robotics [2]. At its roots, juggling is both a highly skillful and entertaining pastime, requiring immense hand-eye coordination to master.
2.1.2 Science & Juggling: What goes around, comes around...
Whilst entertaining, the scientific aspects of juggling have also been researched. The mathematician Claude E. Shannon (regarded by many as the father of modern-day Information Theory [1]) was also a keen juggler. Shannon published two papers on juggling [24]. One, titled Claude Shannon's No-Drop Juggling Diorama, discusses a toy machine of clowns juggling rings, balls and clubs that was given to Shannon (shown in Figure 2.2).
Figure 2.2: Claude Shannon's No-Drop Juggling Diorama
The other paper, entitled Scientific Aspects of Juggling, gives a brief history of juggling and introduces Shannon's Theorem for the Uniform Juggle. A Uniform Juggle is a juggling pattern which has the following properties:
1. There is never more than one ball in one hand at any time (having more than one ball in a hand is known as multiplexing)
2. The flight times of all balls are equal
3. The time each hand has a ball in it is equal
4. The time each hand is empty is equal
These properties may at first seem quite restrictive, but they are very common amongst jugglers. Many juggling patterns follow these principles (including the three, five and seven ball cascade patterns), and the principles can be adapted to cover patterns that involve juggling under the leg, from behind the back or even over the head. Shannon's theorem is as follows:
\[ \frac{F + D}{B} = \frac{V + D}{H} \]
Parameter   Name          Explanation
F           Flight Time   Time a ball spends in flight
D           Dwell Time    Time a hand spends holding a ball
V           Vacant Time   Time a hand spends being empty
B           Balls         Number of balls
H           Hands         Number of hands*

* Shannon's theorem works when the number of hands is greater than or equal to 2
This theorem is simple at heart: for a given number of balls and hands, the flight time of a ball (F), the dwell time (D) and the vacant time of a hand (V) must stay in a fixed ratio, with the dwell time making up the rest of the time taken for a ball to complete a cycle. If a juggler holds a ball in one hand for longer than expected (increasing D), they will have to start throwing the balls more quickly (by altering the flight time of the ball or the time a hand spends empty) in order to maintain a uniform juggle.
The JugEm system uses this equation to calculate how well a user is juggling. The system rearranges the above equation to calculate a Shannon Score as follows:

\[ \mathit{score} = \frac{(F + D)\,H}{(V + D)\,B} \]

This number will be 1 when the juggle is considered to be a perfect Uniform Juggle (using Shannon's Theorem).
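As a concrete illustration, the calculation is a one-liner (a minimal C# sketch with hypothetical names; the report does not reproduce JugEm's actual scoring code):

// Minimal sketch (hypothetical helper): computes the Shannon Score.
// score = (F + D)H / ((V + D)B); exactly 1.0 for a perfect Uniform Juggle.
public static double ShannonScore(double flight, double dwell,
                                  double vacant, int balls, int hands)
{
    return ((flight + dwell) * hands) / ((vacant + dwell) * balls);
}

// Example: F = 600ms, D = 300ms, V = 150ms, 3 balls, 2 hands gives
// (900 * 2) / (450 * 3) = 1800 / 1350 = 1.33..., i.e. not yet a uniform juggle.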
2.1.3 Juggling Notation
Juggling also has various forms of notation used to describe patterns. There are several flavors of diagram-based notation, as well as a popular numerical notation.
Ladder Diagrams
Ladder Diagrams are 2-dimensional diagrams whose vertical dimension represents time, with time 0 at the top of the diagram (or ladder) and time increasing downwards. Ladder Diagrams have the benefit of showing time-related information about the pattern, however they are not particularly good for visualizing the actions involved in juggling it.
Figure 2.3 shows the Ladder Diagram for the 3 Ball Cascade pattern. Each ’rung’ on the ladder represents a
beat of time. Throws are represented in red and catches in green. With respect to the hands, the diagram shows
that each hand catches and throws alternately. With respect to the ball the diagram shows each ball remaining in
the hand it is caught by for one beat of time (dwell time) and being in the air for two beats of time (flight time).
Causal Diagrams
Causal Diagrams are another 2-dimensional representation of juggling patterns. In contrast to Ladder Diagrams, they show the events that lead to having to throw a ball. This gives the diagram a simpler form than a Ladder Diagram (which can make the pattern easier to visualize).
Figure 2.3: Ladder Diagram of 3 Ball Cascade
Figure 2.4 shows the Causal Diagram for the 3 Ball Cascade pattern. The diagram shows that when a throw is made from the left hand, the next action needed is a throw from the right hand (and so forth). The arrows in the diagram show which throws cause other throws.
Figure 2.4: Causal Diagram of 3 Ball Cascade
Siteswap Notation
Siteswap notation is a numerical representation of a juggling pattern. Who invented Siteswap notation is still disputed, as three very similar notations were invented independently in the early 1980s: the first by Paul Klimek in 1981 [14], the second by Bruce Tiemann in 1985 [7] and the third by Mike Day, Colin Wright and Adam Chalcraft, again in 1985 [12]. Over the years there have been various adaptations to Siteswap so that it can represent further forms of juggling, including multiplex patterns (having more than one ball in a hand at one time), patterns using more than two hands, and passing patterns.
Given the inherent mathematical complexity and variety involved in Siteswap notation, this section will not fully describe how it works. For the remainder of this report, it is enough to know that each number in a Siteswap pattern relates to how long the ball is in the air for.
The Siteswap notation for a 3 Ball Cascade is 3333333... (which is often shortened to just 3). The Siteswap
notation for a 3 Ball Shower pattern is 51 (as in this pattern one ball is thrown high, and another quickly below
it).
2.2 Technology Survey

2.2.1 Mobile Devices
The software could also be written for a mobile device (mobile phone, tablet etc.). Many modern mobile devices are equipped with various types of sensors that can be used to track the device's position and motion. Table 2.2.1 outlines the sensors available on the candidate device types, including the two most common mobile platforms available today (Android and iOS).
Device               Platform       Sensors                                           Cost    API
Mobile Device        Android        Accelerometer, Camera, Light Sensor,              Medium  Open
                                    Gyroscope, Gravity*, Proximity*, Rotation*
Mobile Device        iOS            Accelerometer, Camera, Light Sensor, Gyroscope*   High    Restricted
Games Console        Nintendo Wii   Accelerometer, PixArt Optical Sensor,             Medium  Open Source
                                    Infrared light sensor
Desktop Application  Kinect Sensor  Camera, Infrared Depth Sensor,                    Medium  Open
                                    Microphone Array

* only available on certain models of the device
Mobile Device Sensors
Both Android and iOS mobile devices contain sensors that could be used to track a user's hand movements (or perhaps even to use the device itself as a ball being juggled). Android devices tend to have a wider variety of sensors on each device [11], however both device types share the same common sensors.
Two of the main concerns separating the platforms are the API and cost. The Android Software Development Kit (more commonly called an SDK) is based on the Java platform, and so runs on both Unix and Windows machines, whereas the iOS SDK is limited to running only on Mac platforms and uses the Objective-C programming language. These restrictions on the iOS development platform (coupled with the fact that the cost of such devices is particularly high) meant that this platform was not considered for this system.
Given the cost of devices and the good availability of the Android SDK, two potential systems were considered for developing the tutorial software on a mobile device.
Juggling the actual device(s)
Data could be gathered from a device's Accelerometer on the x, y and z axes. This data could then be used to determine an approximate trajectory that a device has taken when thrown, however this method has various impracticalities. Firstly, expecting a user to juggle multiple mobile devices at the same time will undoubtedly lead to the devices being dropped or damaged. Secondly (assuming the devices were unbreakable and in abundant supply), the devices would need to somehow communicate (either with each other, or all communicating with another device). Finally, if using only a single device, the software would be unable to distinguish between juggling one item and juggling one hundred items.
Strapping devices to a user’s hand(s)
Another way the mobile devices could be used is by strapping them to the user's hands. When juggling, a user should throw from the elbow and keep their hands level, so a user's juggling could be tracked through their hands' positions, recorded by a mobile device strapped to each hand. Ideally, the user would need two devices (one for each hand), which again presents the problem of the devices communicating with each other in some manner; however, one device could be used to track the motion of one hand. Regardless of the number of hands that are tracked, this method does not allow for gathering data on the trajectories of the items being juggled, or even how many items are being juggled.
One of the primary issues with using Accelerometer data to work out positions is that it is highly inaccurate; David Sachs gives an interesting talk discussing this issue [6]. If either of the methods above were used, retrieving the position of the device in order to track the user's hands would certainly be desired. To get position data from a mobile device, the accelerometer readings must be integrated twice (following basic mathematical principles). The drift associated with a single integration is already a problem, so the drift involved in double integration is much higher. For this reason, working with mobile devices was ruled out, as the results can prove unpredictable.
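To illustrate the scale of the problem, the following sketch (with made-up numbers, not code from any of the projects discussed) double-integrates a small constant accelerometer bias; the position error grows quadratically with time:

// Sketch: naive dead reckoning from accelerometer samples.
double bias = 0.05;     // m/s^2 of constant sensor error on every sample
double dt = 0.01;       // 100 Hz sampling
double velocity = 0.0, position = 0.0;

for (int i = 0; i < 1000; i++)       // simulate 10 seconds
{
    double measured = 0.0 + bias;    // the device is actually stationary
    velocity += measured * dt;       // first integration: error grows linearly
    position += velocity * dt;       // second integration: error grows quadratically
}
// position is now roughly 0.5 * 0.05 * 10^2 = 2.5 m of pure drift,
// far larger than the hand movements being tracked.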
2.2.2 Wii Controllers
Previous projects have also attempted to build juggling systems using Wii Controllers. Wii Remotes contain an Accelerometer, a PixArt Optical Sensor and an Infrared light sensor. These sensors can be combined to track a user's hands whilst they are juggling.
Nunez et al have previously devised a system that does this very task [15]. They use the Wii Remotes to track the user's hand positions and movements, and use this data to juggle balls in a 3D Virtual Environment. They also use the Wii Remotes' rumble packs to provide feedback to the user. In their paper they highlight the fact that the IR Camera on the controller is particularly good at tracking multiple objects, and also that a similar system using the Kinect IR Camera would work; however, their project focuses on being light-weight and cost-effective, and given the higher price of the Kinect Sensor, their project clearly benefits from using the Wii Controller.
WiiSticks [29] is another example of a project that tracks user’s hands whilst juggling a single baton (or stick).
2.2.3 Kinect Application
The Kinect Sensor is a Microsoft device, released in 2010 for the XBox 360 games console, that allows users to interact with and control their games console using gestures and spoken commands.
Figure 2.5: Microsoft Kinect Sensor
The Kinect Sensor contains 3 sensors that allow it to track motion: an RGB Camera that captures color images (at a 1280x960 pixel resolution), an Infrared Depth Sensor that measures the distance between objects and the sensor, and a 24-bit Microphone Array (containing 4 microphones) that allows digital audio capture [18].
One of the primary obstacles when using Kinect is that the Sensor and SDK do not include libraries for tracking objects. Currently, the Kinect Sensor can only track skeletal bodies (such as a human); tracking other objects requires creating or reusing a third-party library. Various attempts have been made by third parties on this very subject, and most involve combining the depth and color inputs and performing varying amounts of image processing to filter out noise and find objects. Once these objects are found, various methods can be used to track or calculate the trajectory of the object.
2.3 Existing Software Survey
This section will discuss existing software systems that aim to help people learn how to juggle (in one form or another). During this research, it became very apparent that there are very few interactive software systems for teaching people to juggle; most take the form of animations or other means of teaching some aspect of juggling. Some of the better software systems are discussed in this section.
2.3.1 Learn to Juggle Clubs
Figure 2.6: Learn To Juggle Clubs Screenshots
Learn to Juggle Clubs is an Android application that provides the user with video tutorials on how to juggle with clubs [10]. The application provides a total of four videos (from juggling a single club to juggling clubs behind the back and under the leg), which describe, in rather brief detail, how a user should juggle clubs. Even though the application focuses mainly on clubs, some of the points it raises would be valid for a user learning to juggle with balls (or other objects).
The application itself seems to be little more than a web page that hosts the videos. This method of teaching is not particularly interactive, but such an application could certainly be used to learn some of the principles of club juggling.
2.3.2 Wildcat Jugglers Tutorials
The Wildcat Jugglers group provides a far more extensive set of video-based tutorials [3] for people wishing to learn juggling patterns. Their website contains 74 tutorials, each one for a different trick or pattern. The tutorials come in the form of web-based videos, with accompanying text to talk the user through how the pattern should be performed.
Figure 2.7: Wildcat Juggling Tutorial Screenshots
The website also makes reference to another web-based set of tutorials from the Tunbridge Wells Juggling Club [28], which contains even more patterns; however, the Tunbridge Wells club's website only contains text-based descriptions of how to perform patterns. Together, the Wildcat videos and Tunbridge Wells descriptions could prove a useful reference of juggling tips for beginners (and even more advanced jugglers).
2.3.3 Juggle Droid Lite
Figure 2.8: Juggle Droid Lite Screenshots
Juggle Droid Lite is an Android application for visualizing Siteswap notation [25]. It allows users to input different Siteswap patterns and shows an animation of a stick person juggling balls in the pattern. The application also contains a list of over 200 existing Siteswap patterns, allowing users to see the different parts of a pattern being animated.
Whilst beginners might not fully understand Siteswap notation, the patterns that come with the application are
all sorted and given explanatory names, allowing beginners to see the patterns in action. The application also
allows users control over the speed of the juggle by tweaking the beats per minute and the dwell time associated
with the patterns.
2.3.4 Software Survey Conclusion
The software discussed in this section is not an exhaustive list. Various other tutorial websites are available on the Internet, however they mostly offer either video or text descriptions of how to juggle. Siteswap notation visualizers are also very common amongst juggling-related software; whilst they might be slightly more complicated than a first-time juggler is looking for, they could no doubt prove useful.
Whilst these sites and applications provide a good number of tips and tricks for beginning jugglers, they do lack interactivity with the user. Watching videos and reading descriptions of what to do is all well and good, but juggling is a skill that is most definitely learned by way of participation. The JugEm system will attempt to bridge this gap by providing the user with a more immersive way of learning how to juggle.
Chapter 3
Requirements
The Requirements chapter outlines the required functionality and behavior of the system (functional requirements), as well as the constraints and qualities that the system should have by nature (non-functional requirements). The requirements for the system were gathered through meetings and discussions with the client about the functionality and behavior that would be useful for users wishing to learn how to juggle.
3.1 Functional Requirements
When capturing the Functional Requirements, the MoSCoW method was used. The MoSCoW rules are a typical scheme for prioritizing requirements: they separate a software system's requirements into four distinct categories, each reflecting a priority. The four categories of the MoSCoW scheme are given in the list below.
Must Have: The system will not be successful without this feature
Should Have: This feature is highly desirable
Could Have: This feature is desirable if everything else has been achieved
Would Like To Have: This feature does not need to be implemented yet
Using the MoSCoW method allows the requirements to be prioritized and, before any development has started, allows estimation of what will be achievable by the system and what can be considered outwith its scope.
Must Have
• Detect user throwing a ball
• Detect user catching a ball
• Detect a peak of a ball
• Track user's hand positions
• Provide report on a juggling session
• Ability to detect 1, 2 and 3 ball patterns

Should Have
• Calculate Shannon Score for a juggle
• Suggestions on improving juggling technique
• Detect user dropping a ball
• Ability to detect 4 ball patterns

Could Have
• Tasks to improve timing
• Tasks to improve throwing

Would Like To Have
• Ability to define patterns using siteswap notation
The Requirements gathering process was an iterative one. Over the course of meetings with the client, new requirements were added and removed, and priorities changed as the project progressed. Details on which of these Requirements were met by the system are provided in the Evaluation chapter.
3.2 Non Functional Requirements
The non-functional requirements describe constraints and qualities possessed by the system. Typically, Non-Functional Requirements are harder to gather than Functional ones; however, the purpose of considering them is to improve the overall system. The Non-Functional Requirements are listed below.
• The system has been designed and tested to run on a standard Intel Desktop PC running Windows 7 or later. The system may run on earlier versions of Windows, however this is not guaranteed.
• The system uses a standard XBox 360 Kinect Sensor as input.
• When using the system, juggling balls of different solid colors (Red, Green and Blue) should be used, typically around 65mm in diameter.
• The system is specifically designed to be run in a well-lit, spacious environment (of around 3 m²).
The JugEm system performs to an acceptable standard (30 frames per second) on a machine with an Intel Core i5-3570K CPU clocked at 3.40GHz and 8 Gigabytes of system RAM.
Chapter 4
Design
This chapter outlines the high-level design strategy and architectural patterns applied to the JugEm system.
4.1 Library Support
This section will discuss the various software libraries that are used in the JugEm system.
4.1.1 Kinect Software Development Kit
The Kinect for Windows Software Development Kit [17] provides the system with all the necessary classes and methods to inter-operate with a Kinect Sensor. The Kinect SDK is recommended for use with a Kinect Sensor for Windows, however the libraries still work with an XBox Kinect Sensor (the extra features that are unique to the Windows Kinect Sensor are simply not available).
The Kinect SDK provides the drivers necessary to use the Kinect Sensor on a Windows-based machine, all the necessary APIs and device interfaces, along with technical documentation and some source code samples. From these APIs, the system is able to retrieve four things:
Infrared Depth Data: the distance (in millimeters) from the sensor to the surroundings it can see.
RGB Images: an RGB image of what the Kinect Camera can see.
Skeleton Information: a list of the potential human bodies the Sensor can see in the surroundings.
Audio Capture: audio from the surroundings, captured by the Microphone Array in the Kinect Sensor.
The data provided by the Kinect Sensor is the primary source of data; the system processes this data in order to work out when a user is juggling in front of the Kinect Sensor. The Skeleton data provided by the APIs only gives the positions of the user's joints (the joints tracked by the Kinect Sensor are shown in Figure 4.1 [27]). The system uses this data to get the position of the user's wrists, and then performs various operations on the Color and Depth data in order to do Object Detection and detect Events (these operations are discussed in detail in the Implementation chapter).
During development, a new version of the Kinect SDK was released (v1.5 to v1.6) [19]; this required minor changes to the system so that it operated with the latest version of the SDK.
Figure 4.1: Kinect Skeleton Joints
4.1.2 EmguCV Image Processing
For the system to detect the juggling balls that the user is throwing and catching, various types of image processing are carried out on the data from the Kinect Sensor. To do this image processing, the system uses the EmguCV library. EmguCV is a cross-platform wrapper library that gives various .NET languages (VC++, C#, Visual Basic and others) access to the OpenCV image processing library. OpenCV is a free, open source image processing library developed by Intel [13].
Using the EmguCV wrapper gives the system access to the classes and methods it needs to perform:
Depth Image Thresholding: used to remove from the Depth Image all depths that are greater than a certain distance.
Contour Detection: allows the system to detect the contours present in the Depth Image (after filtering).
Color Image Conversion and Filtering: used to convert the Kinect Sensor's Color Image into the Hue, Saturation and Value color space and filter it to perform color detection.
4.1.3 Microsoft Speech Platform 11
To allow the system to recognize the voice commands that start and stop the recording of a juggling session, the system uses the Microsoft Speech Platform [22]. The Microsoft Speech Platform provides an API with redistributable Grammars and voice recognition methods that allow the system to recognize voice commands using the Kinect Sensor's microphone array.
4.1.4 XNA Framework
The program is developed using the XNA Framework 4.0 from Microsoft. This framework provides the program with all the necessary run-time libraries and classes to develop games for mobile devices, the XBox 360 and the Windows platform. The main class provided by this framework is the Game class; its class diagram is given in Figure 4.2.
Figure 4.2: XNA Game Class
Every program that uses the XNA Framework contains a class that extends the Game class. The Game class contains various properties and methods; the most important ones are outlined below.
TargetElapsedTime: the target frame time of the program. The lower this value, the more often the Game class's methods will be called.
Initialize: called once at the start of the game; allows the program to set up any variables it needs (for example, it can set up the Kinect Sensor).
LoadContent: called before the first frame to allow the program to set up any graphics, audio or other game content that the program may need.
Update: called for every frame; used to update any variables, state etc. that the program needs to update.
Draw: also called for every frame; used to render all the necessary graphics to the user's screen.
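A minimal sketch of how these methods fit together in an XNA program is shown below (illustrative only; JugEm's own Game subclass does considerably more in each method):

using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class MinimalGame : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    public MinimalGame()
    {
        graphics = new GraphicsDeviceManager(this);
        // ask for roughly 30 updates per second (the rate JugEm targets)
        TargetElapsedTime = TimeSpan.FromSeconds(1.0 / 30.0);
    }

    protected override void Initialize()
    {
        // one-off setup, e.g. connecting to the Kinect Sensor
        base.Initialize();
    }

    protected override void LoadContent()
    {
        // load graphics, audio and other content before the first frame
        spriteBatch = new SpriteBatch(GraphicsDevice);
    }

    protected override void Update(GameTime gameTime)
    {
        // called every frame: poll sensors, update state, detect events
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        // called every frame: render everything to the user's screen
        GraphicsDevice.Clear(Color.CornflowerBlue);
        base.Draw(gameTime);
    }
}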
The class in JugEm that extends the Game class is JugEmGame.cs. On top of this class sits a Game State Management framework, which also contains classes with the methods mentioned above. This framework is described below.
4.2 Game Screen Management
As the program contains various different types of screens (for example, menu screens, report screens and game screens), a framework was used to provide a sound architecture for development. The framework used was adapted from Microsoft's Game State Management sample code. Using the approach discussed in this chapter allowed the program to be neatly separated into various components that are displayed to the user and updated accordingly. The basic outline of the architecture provided by this framework is given in Figure 4.3.
Figure 4.3: Game Screen Management Architecture
4.2.1 Screens
The main abstract class provided in this framework is the GameScreen class. Every type of screen used in the program is a subclass of GameScreen, which provides various methods and properties as outlined in Figure 4.4. The GameScreen class contains methods similar to those discussed for the XNA Game class. These methods perform the same functions as they would in a subclass of the Game class, allowing each screen in the game to do its own work in its own Update and Draw methods. The ScreenState enumerated type is used by the Screen Manager to determine which screen to display, and also to provide simple transition effects when a screen becomes visible or is closed.
The MenuScreen class provides various methods and variables that allow Screens to contain various types of Menus. These screens are used in the program to let the user specify options and choose which type of juggling training they would like to try. The GameplayScreen class is used for the more important Screens in the program, such as the mini games and the main game-play screen.
Figure 4.4: GameScreen Class Diagram
4.2.2 Screen Management
To manage all the screens within a game, the framework provides a class named ScreenManager. The main game class contains an instance of ScreenManager, which each class in the game is able to access. This allows a screen such as the options screen to remove itself and add a new screen (for example, when the user leaves the options screen to return to the main menu). The ScreenManager stores all of the screens that are currently open (because some may be hidden) and the screens which are to be updated (as there may be more than one requiring updating); this enables the ScreenManager class to forward updates to the relevant Screen so that it can carry out its own work. The ScreenManager class diagram is shown in Figure 4.5.
Figure 4.5: ScreenManager Class Diagram
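The core of this arrangement can be sketched as follows (simplified from the Microsoft sample; the real ScreenManager also handles input, transitions and content loading):

using System.Collections.Generic;
using Microsoft.Xna.Framework;

public class ScreenManager : DrawableGameComponent
{
    private readonly List<GameScreen> screens = new List<GameScreen>();
    private readonly List<GameScreen> screensToUpdate = new List<GameScreen>();

    public ScreenManager(Game game) : base(game) { }

    public void AddScreen(GameScreen screen) { screens.Add(screen); }
    public void RemoveScreen(GameScreen screen) { screens.Remove(screen); }

    public override void Update(GameTime gameTime)
    {
        // work from a copy so screens can add or remove screens mid-update
        screensToUpdate.Clear();
        screensToUpdate.AddRange(screens);

        // forward the update to every open screen
        foreach (GameScreen screen in screensToUpdate)
            screen.Update(gameTime);
    }

    public override void Draw(GameTime gameTime)
    {
        // only visible screens are drawn (hidden ones are skipped)
        foreach (GameScreen screen in screens)
            if (screen.ScreenState != ScreenState.Hidden)
                screen.Draw(gameTime);
    }
}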
4.3 Events
To identify when a user is juggling in front of the Kinect Sensor, the program processes the information available to it and raises Events based on what the user is doing. The main types of Events that the program detects are discussed in this section.
4.3.1 Detecting Events
To detect Events, various calculations are performed with regard to the user's hands and the juggling balls that have been detected. The area around each of the user's hands is separated into two distinct zones, the Hand Zone and the Throw Zone. These zones are shown in Figure 4.6.
Figure 4.6: Zones around one of the user’s hands
The Hand Zone is used so that any juggling ball detected within this area can be ignored. If the Hand Zone were not ignored, the system would, from time to time, detect the user's hands or fingers as juggling balls, which is undesirable. The system is still able to track approximately how many juggling balls a user has in their hands, as the system knows how many balls are being used and, in a perfect scenario, exactly how many catches and throws have taken place for each hand.
The Throw Zone is used to detect both ThrowEvents and CatchEvents. A ThrowEvent occurs when
a juggling ball is seen inside the Throw Zone and is then seen outside the Throw Zone in a subsequent Frame.
In contrast a CatchEvent occurs when a juggling ball is seen outside of the Throw Zone, and then inside the
Throw Zone in a subsequent Frame.
Each of the user’s hands is surrounded by these Zones so that throws and catches can be seen for both hands.
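In outline, the test reduces to comparing a ball's zone membership across two successive Frames (a sketch; InThrowZone and RaiseEvent are hypothetical helpers, not JugEm's actual method names):

// Sketch: detecting throws and catches for one hand from two successive frames.
void CheckThrowZone(Vector2 previousBallPos, Vector2 currentBallPos,
                    Vector2 handPos, Hand hand, long timeMs)
{
    bool wasInside = InThrowZone(previousBallPos, handPos);
    bool isInside = InThrowZone(currentBallPos, handPos);

    if (wasInside && !isInside)        // ball has just left the Throw Zone
        RaiseEvent(new ThrowEvent(currentBallPos, timeMs, hand));
    else if (!wasInside && isInside)   // ball has just entered the Throw Zone
        RaiseEvent(new CatchEvent(currentBallPos, timeMs, hand));
}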
4.3.2 Event Hierarchy
Due to the similarities between the different types of Events, the hierarchy shown in Figure 4.7 was used. The base class (Event) contains the two variables that every Event must have: a position (on the screen) and the time the Event was seen (used for Shannon's Uniform Juggling equation). This base class is then sub-classed into one of two possible classes, a HandEvent or a BallEvent.
A HandEvent is an Event that involves one of the user’s hands (for example throwing or catching a ball) and
the HandEvent class provides a variable in which to store which hand was involved in this Event (using the Hand
enumerated type).
Figure 4.7: Events Class Diagram
A BallEvent is an Event that involves only a ball. A PeakEvent covers the situation where a ball changes its vertical direction from up to down, and a DropEvent covers a ball being seen below the lower of the user's hands. Neither of these Events needs any information about the hands involved; they are only concerned with the position of the juggling ball when the Event is detected.
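In code, the hierarchy might look as follows (a sketch based on Figure 4.7; the exact fields and constructors are assumptions):

using Microsoft.Xna.Framework;   // for Vector2

public enum Hand { Left, Right }

public abstract class Event
{
    public Vector2 Position;   // where on screen the Event was seen
    public long Time;          // when the Event was seen (milliseconds)

    protected Event(Vector2 position, long time)
    {
        Position = position;
        Time = time;
    }
}

// Events involving one of the user's hands
public abstract class HandEvent : Event
{
    public Hand Hand;   // which hand was involved
    protected HandEvent(Vector2 p, long t, Hand h) : base(p, t) { Hand = h; }
}

public class ThrowEvent : HandEvent { public ThrowEvent(Vector2 p, long t, Hand h) : base(p, t, h) { } }
public class CatchEvent : HandEvent { public CatchEvent(Vector2 p, long t, Hand h) : base(p, t, h) { } }

// Events involving only a ball
public abstract class BallEvent : Event
{
    protected BallEvent(Vector2 p, long t) : base(p, t) { }
}

public class PeakEvent : BallEvent { public PeakEvent(Vector2 p, long t) : base(p, t) { } }
public class DropEvent : BallEvent { public DropEvent(Vector2 p, long t) : base(p, t) { } }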
4.3.3 Event Delegation
Due to the way each Frame is seen and processed by the program, the handling of these Events must be delegated to a class other than the one in which they are detected. This is because different mini games handle Events in different ways, and delegation gives each mini game complete control over what work must be done when a juggling Event is detected. To ensure that the Events created are accurately timed, a .NET Stopwatch object is used: this type of timer has very high precision (of the order of nanoseconds) [21], and as the program uses milliseconds when timing Events, this was deemed a suitable choice.
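One natural way to realize this in C# is with the built-in delegate/event mechanism (a sketch of the idea with hypothetical names; the report does not specify JugEm's exact delegation code):

using System;
using System.Diagnostics;

// Sketch: the detector raises a .NET event, and each mini game subscribes
// its own handler, so the handling logic lives in the mini game rather
// than in the detector.
public class JugglingEventDetector
{
    // Stopwatch gives high-precision timestamps for each Event
    private readonly Stopwatch timer = Stopwatch.StartNew();

    // raised whenever a juggling Event is detected
    public event Action<Event> EventDetected;

    protected void Raise(Event e)
    {
        Action<Event> handler = EventDetected;
        if (handler != null)
            handler(e);
    }
}

// A mini game decides for itself what to do with a throw:
//   detector.EventDetected += e => { if (e is ThrowEvent) throwCount++; };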
4.3.4 Event Pattern Matching
When performing a juggle there is always a pattern to be found (unless, of course, the user is just randomly throwing and catching balls). The system is designed to recognise one, two and three ball patterns (the three ball pattern recognised is the three ball cascade). The two ball (top) and three ball cascade (bottom) patterns are given in Figure 4.8.
Figure 4.8: 2 and 3 Ball Event Patterns
Whilst the user is juggling in front of the system, the system continually builds up a list of the Events it has seen. It is possible that, due to the amount of work the system is doing to detect these Events, some Events will have been missed. Consider the situation given in Figures 4.9 and 4.10.
Figure 4.9: Perfect 3 Ball Cascade Event Pattern
Figure 4.10: Events seen by the system
Comparing the two Figures, it can be seen that the system has missed a Peak, a Right Throw and a Left Catch. To handle these situations, the system contains three different kinds of EventAnalysers. The purpose of an EventAnalyser is to take the list of Events that the system has seen and infer all the Events it has missed, so as to match the pattern (for the current number of juggling balls). Each EventAnalyser works in the same way, but is used to recognise a different Event pattern. For this reason, the design hierarchy shown in Figure 4.11 was chosen.
Figure 4.11: EventAnalyser Hierarchy
By making EventAnalyser an abstract class, the patterns to be found (and some other properties) can be stored in the EventAnalyser class, while the logic for fitting a list of Events to these patterns is separated into each of the different concrete classes.
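A sketch of that hierarchy in code (the member names beyond EventAnalyser itself are assumptions based on Figure 4.11):

using System.Collections.Generic;

public abstract class EventAnalyser
{
    // the ideal Event pattern for this analyser's number of balls;
    // shared storage for all concrete analysers
    protected List<Event> pattern;

    // take the raw list of Events the system saw and return a list with
    // any missed Events inferred and inserted to match the pattern
    public abstract List<Event> Analyse(List<Event> seenEvents);
}

// one concrete analyser per supported pattern, for example:
public class ThreeBallCascadeAnalyser : EventAnalyser
{
    public override List<Event> Analyse(List<Event> seenEvents)
    {
        // ...infer missed Peaks, Throws and Catches here...
        return seenEvents;
    }
}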
The logic behind the concrete EventAnalyser classes is discussed in the Implementation and Evaluation sections.
Chapter 5
Implementation
This chapter describes the low-level implementation details of the JugEm system, focusing on key software engineering challenges and the solutions devised for them. Figure 5.1 shows the sequence of data processing tasks implemented in the JugEm system; the rest of this chapter describes each task in detail.
Figure 5.1: Pipeline of Data Processing Tasks
5.1 Kinect Data Input
This section discusses the raw data that is received from the Kinect Sensor.

5.1.1 Depth Data
The raw depth data retrieved from the Kinect Sensor takes the form of a DepthImagePixel[]. This array is a 2-dimensional array (similar to the one shown in Figure 5.2) flattened into a 1-dimensional array.
Figure 5.2: 2D Array
Each index into this array stores a DepthImagePixel, which holds two pieces of information: the distance of this pixel from the sensor (depth), and whether or not the Kinect Sensor considers this pixel to be part of a player. If the Kinect Sensor considers this pixel to be part of a player, the PlayerIndex field will be set to a number between 1 and 7 (otherwise it will be 0); the Kinect Sensor can only track a maximum of 7 players at any given time. The structure of each DepthImagePixel is given in Figure 5.3.
Figure 5.3: Depth Pixel Structure
Prior to the Kinect SDK v1.6 upgrade, the depth data was provided in the form of a short[]; the same information was contained within this array, however bit shifting was required in order to separate the fields. Kinect SDK v1.6 introduced the new DepthImagePixel type, which has the structure mentioned above and, due to its class-level members, eliminates the need to bit shift.
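For reference, unpacking the old short[] format looked roughly like this (a sketch using the SDK's bitmask constants; the player index occupies the low three bits of each value and the depth the remaining bits):

// Sketch: separating the fields of the pre-v1.6 short[] depth format.
short raw = rawDepthData[index];

// low 3 bits: player index (0 = no player, 1-7 = tracked player)
int playerIndex = raw & DepthImageFrame.PlayerIndexBitmask;       // mask is 7

// remaining bits: distance from the sensor in millimeters
int depthMm = raw >> DepthImageFrame.PlayerIndexBitmaskWidth;     // width is 3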
5.1.2 Color Data
The Kinect Sensor provides its raw color data in a similar way to the depth data. It provides the data in the form of a byte[], which is a flattened 2D array (similar to the array discussed previously for depth data). Each pixel in the color image from the camera is represented as four bytes in the flattened array (as shown in Figure 5.4).
Figure 5.4: Color Pixel Structure
These values represent the Red, Green, Blue and Alpha Transparency values for each pixel in the image. As the Kinect Sensor supports various different image formats, this layout can differ depending on the format chosen; the JugEm system uses the RGB format (with a resolution of 640x480). This means that the size of the byte[] will be (width × height × 4), and the bytes for pixel (x, y) will be at indices ((y × width + x) × 4) to ((y × width + x) × 4 + 3) inclusive.
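For example, the four channel values of a single pixel can be read as follows (a sketch; the channel order shown follows the description above):

// Sketch: reading pixel (x, y) from the flattened color array.
int index = (y * width + x) * 4;    // 4 bytes per pixel, row-major order

byte red   = colorData[index];
byte green = colorData[index + 1];
byte blue  = colorData[index + 2];
byte alpha = colorData[index + 3];  // alpha transparency channel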
In order to process this information appropriately, the system carries out four main tasks, which are discussed in the remainder of this chapter.
5.2 Depth Image Processing
On each call to the current game screen's Update method, the latest depth data is copied into an array of DepthImagePixels. Each element in this array stores the distance of that pixel from the sensor. The Kinect Sensor hardware runs Skeletal Recognition algorithms on the data it generates; as a result, each element in the DepthImagePixel[] also records whether or not the hardware considers that pixel to be part of a player. During initial prototyping with the Kinect Sensor, it was found that the Sensor regards a ball that has been thrown from a player's hand as part of the player (even after it has left the hand). Given this fact, the depth data retrieved from the sensor at each Update call is filtered to remove any pixel that the sensor does not consider to be part of a player. The results of carrying out this kind of filter are given in Figure 5.5.
Figure 5.5: Filtering player from depth data
Figure 5.5 shows (from left to right) the color image from the Sensor, the raw depth data from the Sensor, the raw depth data converted to a gray-scale image, and finally the result of removing all non-player pixels from this image.
The method responsible for this filtering is given in the following code listing. The Helper.GetDepths method uses a parallel for loop to iterate over all the DepthImagePixels in the DepthImagePixel[] retrieved from the sensor. It calculates the index into the 1-dimensional array (line 8) using the 2-dimensional coordinates of the pixel in the image.
The DepthImagePixel at this index is then checked to see whether it contains a player (lines 9-13). If it does, the value of its depth is passed to the CalculateIntensityFromDepth method, which is used to smooth the color applied to the player pixels in the depth data: the smaller the depth from the Sensor, the whiter the pixel will appear (as can be seen in the final image in Figure 5.5).
The byte[] that this method returns is used to create the grayscale image shown at the end of Figure 5.5.
 1  public static byte[] GetDepths(byte[] depths, DepthImagePixel[] depthPixels, int width, int height)
 2  {
 3      // width should be 640 so best to parallelise over the width
 4      Parallel.For(0, width, i =>
 5      {
 6          for (int j = 0; j < height; j++)
 7          {
 8              int rawDepthDataIndex = i * height + j;
 9              if (depthPixels[rawDepthDataIndex].PlayerIndex > 0)
10                  depths[rawDepthDataIndex] = CalculateIntensityFromDepth(
11                      depthPixels[rawDepthDataIndex].Depth);
12              else
13                  depths[rawDepthDataIndex] = 0;
14          }
15      });
16      return depths;
17  }

src/GetDepths.cs
The resulting image from Figure 5.5 then has a threshold applied to it, to remove any player pixels that are further away than a certain threshold distance. This thresholding is done by the EmguCV library; the formula it uses is as follows:
\[
dst(x, y) =
\begin{cases}
0 & \text{if } src(x, y) > thresholdValue \\
maxVal & \text{otherwise}
\end{cases}
\]
This threshold value is recalculated on each Update call, and is set to the depth of the player's wrists (obtained from the Skeleton data retrieved from the Kinect Sensor). The resulting image for this thresholding is shown in Figure 5.6.
Figure 5.6: Applying threshold to filtered depth data
This image (represented as a byte[]) is filtered in lines 5-6 of the code listing in 5.3.1. That method is passed the current threshold value (from the player's wrist depth) and makes a call to an EmguCV method called cvThreshold. The cvThreshold call runs a Binary Inverse Threshold on the image, meaning that any pixel value greater than the threshold is set to 0, and any pixel value less than the threshold is set to the maximum value (255, the maxValue parameter passed to the method).
5.3 Ball Processing

5.3.1 Ball Detection
After the depth data from the Kinect Sensor has been filtered to produce the image in Figure 5.6, the DetectBalls method is able to detect the juggling balls in the image. This is done using the EmguCV library, which runs Canny Edge Detection on the image; the theory behind Canny Edge Detection is discussed in John Canny's paper on the subject [5]. The Canny edge detection is carried out on line 8 of the following code listing.
The resulting edges are then processed to try to identify juggling balls and ignore anything that is not wanted. If any of the edges are found to be inside the Hand Zone (as discussed in the Design chapter), they are ignored; this is to prevent the system from incorrectly detecting the user's hands as juggling balls. This check is carried out in lines 19-27 of the code listing.
The DetectBalls method is coded to find juggling balls of a specific radius. It does this using a specific radius value to look for, and a threshold that the radius can be within (named expectedRadius and radThres respectively). For example, a radius of 3 and a threshold of 2 would find circles of radius 1 through 5. Any circles found within these values are deemed to be juggling balls; lines 29-31 ensure that only circles with these radii are processed further.
To correctly detect juggling balls in flight, it is not enough to just find circular edges: when a juggling ball is in flight it can appear to the program as more of a capped ellipse shape. The edges are therefore processed to find all circles and capped ellipses (checked in line 34), and circles detected very close to the user's hands are ignored.
Any edges that pass this processing are added to the List<JugglingBall> which is returned by the method (line 49). All the properties that can be calculated for the juggling ball that has been found are set, and the JugglingBall is added to the list (lines 36-44).
 1  public List<JugglingBall> DetectBalls(byte[] depthMM, int thresholdDepth, int width,
 2      int height, Vector2 LHPos, Vector2 RHPos, Stopwatch gameTimer)
 3  {
 4      .......
 5      CvInvoke.cvThreshold(dsmall.Ptr, depthMask.Ptr, depthByte, 255,
 6          Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY_INV);
 7      .......
 8      Image<Gray, Byte> edges = depthMaskBlock.Canny(180, 120);
 9      .......
10      // for each contour found
11      for (Contour<System.Drawing.Point> contours = edges.FindContours();
12           contours != null; contours = contours.HNext)
13      {
14          // calculate the bounding box x and y center
15          int bbxcent = contours.BoundingRectangle.X + contours.BoundingRectangle.Width / 2;
16          int bbycent = contours.BoundingRectangle.Y + contours.BoundingRectangle.Height / 2;
17          byte bbcentVal = depthMaskBlock.Data[bbycent, bbxcent, 0];
18
19          // make sure that the ball is not too close to the hand positions given
20          if (Helper.PointNearHand(RHPos, bbxcent * 2, bbycent * 2, thresholdDepth))
21          {
22              continue;
23          }
24          else if (Helper.PointNearHand(LHPos, bbxcent * 2, bbycent * 2, thresholdDepth))
25          {
26              continue;
27          }
28          .......
29          // if the radius found is between the expected radius and thresholds
30          if ((actualRadius < expectedRadius + radThres) &&
31              (actualRadius > expectedRadius - radThres))
32          {
33              // if the area is greater than or equal to a capped ellipse
34              if (contours.Area >= box.size.Width * box.size.Height * Math.PI / 4 * 0.9)
35              {
36                  // set the left and right hand distances to save recalculating them later,
37                  // so they are set for every ball relative to its current frame
38                  float ballLHDistance = Vector2.Distance(new Vector2(bbxcent * 2, bbycent * 2), LHPos);
39                  float ballRHDistance = Vector2.Distance(new Vector2(bbxcent * 2, bbycent * 2), RHPos);
40
41                  JugglingBall ball = new JugglingBall(new Vector2(bbxcent * 2, bbycent * 2),
42                      actualRadius, ballLHDistance, ballRHDistance, gameTimer.ElapsedMilliseconds);
43
44                  balls.Add(ball);
45              }
46          }
47      }
48      storage.Dispose();
49      return balls;
50  }

src/DetectBalls.cs
5.3.2 Color Detection
Once the balls have been located in the depth data and returned in a list, the list is processed so that each JugglingBall's color can be determined. To do this, the program makes use of the color data produced by the Kinect Sensor. As the juggling balls are detected in the depth data, their locations must be translated to the color data (as the two may be at different resolutions). A method for doing this is provided in the Kinect SDK [20], however it sometimes returns a sentinel value if the position cannot be translated. Providing the position can be translated, the color data from the Kinect Sensor can be processed to find the color of the ball.
To detect the color of a ball, the program takes a small area of pixels around the JugglingBall's center point (which was set by the DetectBalls method) and works out the average color of those pixels. In initial implementations, color detection was done in the Red, Green and Blue color space, directly using the information received from the Kinect Sensor's color data. However, this color space proved to be inaccurate, and an alternative was sought. A small rapid prototype was built to test various methods of image processing that could result in more accurate color detection.
This rapid prototype showed that color detection using the Hue, Saturation and Value color space was far more accurate. The caveat is a slight increase in the amount of processing done by the program, however the gains in accuracy far outweighed the minor processing cost (especially considering the EmguCV library was available to do the processing). Using the HSV color space means that the program requires a minimum and maximum value for each color channel (Red, Green and Blue), as it filters the image for all three colors and takes the average pixel value of each filtered result (the higher the value, the more of that color is present in the ball).
A screenshot of the images retrieved from the prototype is given in Figure 5.7, showing the original image (on the left) filtered three times (once for each of Red, Green and Blue). Unfortunately this color detection method is very much light dependent: the values used for the filter are only really applicable in the surroundings in which the original image was taken. It can also be noted from the images that, in some surroundings, it is harder to detect certain colors (in the images, Red detection isn't great); this is sometimes counterbalanced by the fact that the other colors show up better, which can eliminate the issue (though not always).
Figure 5.7: HSV Filter Prototype
The color detection algorithm uses EmguCV methods and works in the Hue, Saturation, Value color space. The method creates a color image from the color data taken from the Kinect Sensor (the pixels byte[], line 4). This image is then restricted to the area around the ball that we are interested in (line 6; this helps reduce the amount of work the system has to do with the image later in the method), and a Hue, Saturation, Value image is created from the result (line 8). The method creates three blank masks so that the results of each filter can be processed separately (lines 10-12).
The image is then filtered three times (lines 21-23): once showing only Red pixels, once showing only Green pixels and once showing only Blue pixels. The average pixel value of each of these three filtered images is taken, and the channel with the highest average is chosen as the color of the ball (lines 26-47).
 1  public String DetectBallColorHSV(byte[] pixels, int width, int height, int ballX, int ballY)
 2  {
 3      // create a color image representation of the pixel data
 4      bpImg.Bytes = pixels;
 5      // set our Rectangle of Interest as we don't care about most of the image
 6      bpImg.ROI = new System.Drawing.Rectangle(ballX - 8, ballY - 8, 15, 15);
 7      // convert the image to the Hue, Saturation, Value color space
 8      hsvImg = bpImg.Convert<Hsv, Byte>();
 9      // create the red, green and blue masks for each filter
10      redMask = hsvImg.Convert<Gray, Byte>().CopyBlank();
11      greenMask = hsvImg.Convert<Gray, Byte>().CopyBlank();
12      blueMask = hsvImg.Convert<Gray, Byte>().CopyBlank();
13      // create the range values for each filter
14      MCvScalar redMin = new MCvScalar(118, 239, 0);
15      MCvScalar redMax = new MCvScalar(229, 271, 198);
16      MCvScalar greenMin = new MCvScalar(0, 0, 25);
17      MCvScalar greenMax = new MCvScalar(137, 121, 108);
18      MCvScalar blueMin = new MCvScalar(84, 120, 0);
19      MCvScalar blueMax = new MCvScalar(156, 271, 189);
20      // apply each of the filters
21      CvInvoke.cvInRangeS(hsvImg, redMin, redMax, redMask);
22      CvInvoke.cvInRangeS(hsvImg, greenMin, greenMax, greenMask);
23      CvInvoke.cvInRangeS(hsvImg, blueMin, blueMax, blueMask);
24      // calculate the average pixel colour of each channel;
25      // the higher the average, the more likely that we are that colour
26      int blueAvg = 0;
27      foreach (byte b in blueMask.Bytes)
28      {
29          blueAvg += (int)b;
30      }
31      blueAvg = blueAvg / blueMask.Bytes.Length;
32
33      int redAvg = 0;
34      foreach (byte b in redMask.Bytes)
35      {
36          redAvg += (int)b;
37      }
38      redAvg = redAvg / redMask.Bytes.Length;
39
40      int greenAvg = 0;
41      foreach (byte b in greenMask.Bytes)
42      {
43          greenAvg += (int)b;
44      }
45      greenAvg = greenAvg / greenMask.Bytes.Length;
46
47      return Helper.ColorChooser(redAvg, greenAvg, blueAvg);
48  }

src/DetectBallColor.cs
The color of the ball is set for each JugglingBall that is found. This means that the list of balls generated by the DetectBalls method now contains balls that each have a position and a color associated with them (as well as their distance to both of the user's hands).
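The Helper.ColorChooser method called at the end of the listing is not reproduced in this report. Given the behaviour described above (the channel with the highest average wins), a minimal sketch of it might look like the following; the name and signature are taken from the listing, but the body is assumed.

public static String ColorChooser(int redAvg, int greenAvg, int blueAvg)
{
    // the channel whose mask has the highest average value wins;
    // ties are broken arbitrarily in favour of red, then green
    if (redAvg >= greenAvg && redAvg >= blueAvg)
    {
        return "Red";
    }
    if (greenAvg >= blueAvg)
    {
        return "Green";
    }
    return "Blue";
}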
5.4 Event Detection
5.4.1 Frames and Frame Processing
Once the data from the Kinect Sensor has been processed and the juggling balls have been detected and assigned a color, they are added into a Frame to be processed for detecting Events. The Frame class contains all the necessary information to identify a user juggling.
All of the Frames seen by the program are added into a FrameList. When a Frame is added into this list, it is compared against the previous Frame that was added to determine if one of the detected Events has taken place.
class Frame
{
    // list of juggling balls that are in this frame
    public List<JugglingBall> balls;

    // position of the left hand
    public Vector2 LHPosition;

    // position of the right hand
    public Vector2 RHPosition;

    // time this frame was created
    public double time;

    ....
}
src/Frame.cs
Comparing each Frame created with the previous Frame that was seen allows the program to calculate various extra properties about the flights of the juggling balls that have been detected.
Given that each JugglingBall is assigned a color, and the user is juggling with balls that are all different colors, it is possible to compare a ball in one Frame with the correct ball in the previous Frame. Given all the information contained in two successive Frames, it is also possible to determine the direction that a ball is traveling in (by comparing its previous position with its latest position).
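A minimal sketch of this comparison is given below. It is not the system's actual code: the color field name, the Left/Right members of BALL_DIRECTION and the method itself are assumed for illustration, while position and the direction fields are taken from the listings in this chapter.

// match each ball in the new Frame to its counterpart in the previous Frame
// by color, then derive its directions by comparing the two positions
private void AssignDirections(Frame newFrame, Frame previousFrame)
{
    foreach (JugglingBall current in newFrame.balls)
    {
        // colors are unique per ball, so at most one counterpart can exist
        JugglingBall previous = previousFrame.balls.Find(b => b.color == current.color);
        if (previous == null)
        {
            continue; // this ball was not seen in the previous Frame
        }
        // screen coordinates grow downwards, so a smaller Y means the ball moved up
        current.verticalDirection = (current.position.Y < previous.position.Y)
            ? BALL_DIRECTION.Up
            : BALL_DIRECTION.Down;
        current.horizontalDirection = (current.position.X > previous.position.X)
            ? BALL_DIRECTION.Right
            : BALL_DIRECTION.Left;
    }
}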
The first and second Frames that are added to a FrameList are special cases. In the first Frame, we cannot calculate the balls' directions as there is no previous Frame present, so the program just inserts the Frame into the FrameList without any extra work. However, once the second Frame is added, we can calculate the balls' directions given this new Frame. The program will calculate all of the balls' directions for the second Frame and also set the same balls' directions in the first Frame. Assigning the balls' directions for the third Frame and all remaining Frames works in a similar manner, by comparing the new Frame with the previous one.
Once a direction has been assigned to each ball, we have access to a collection of juggling balls each with both a color and a direction, as well as the position of both of the user's hands.
5.4.2 Detecting Events Using Frames
As each Frame is added to the list, it is checked against the previous one to detect if a ball's vertical direction has changed (called a PeakEvent), and also if the user has thrown or caught a ball (called a ThrowEvent and a CatchEvent respectively). The methods for determining these events are discussed in this section.
Peak Event Detection
The FrameList.CheckPeak method is responsible for detecting PeakEvents. A PeakEvent occurs when a ball's vertical direction changes (found by comparing the JugglingBall's current y position with its previous one, lines 5-6). The method then sets the direction of the most recent ball (currentBall, lines 8-9).
FrameList.CheckPeak is called every time a newly found JugglingBall is being compared to its counterpart in the previous Frame. The method is passed the two JugglingBalls that are to be compared, and also the time (gameTime) that the new ball was seen (this means that when a PeakEvent is detected, the time it is fired with will not be affected by the time this method took to detect the event). Lines 10-12 ensure that the same PeakEvent is not thrown twice in a row. If the currentBall has not changed its vertical direction, it is assigned the same direction as the previousBall (lines 20-24).
When a peak is detected, a new PeakEvent is fired (which is handled in the GameScreen that is currently active). A PeakEvent contains the location of the ball as it changed direction and the time that the event was fired.
 1  public void CheckPeak(JugglingBall currentBall, JugglingBall previousBall, long gameTime)
 2  {
 3      // if the ball has moved down since the last frame and its previous direction was up,
 4      // we have a peak event
 5      if (currentBall.position.Y > previousBall.position.Y
 6          && previousBall.verticalDirection == BALL_DIRECTION.Up)
 7      {
 8          currentBall.verticalDirection = BALL_DIRECTION.Down;
 9          currentBall.horizontalDirection = previousBall.horizontalDirection;
10          if (BallEvent != null)
11          {
12              if ((lastEventThrown is PeakEvent) == false)
13              {
14                  BallEvent(this, new BallEventArgs(new Vector2(currentBall.position.X,
15                      currentBall.position.Y), EventType.Peak, gameTime));
16                  lastEventThrown = new PeakEvent(currentBall.position, gameTime);
17              }
18          }
19      }
20      else
21      {
22          // ball hasn't changed direction, just take the previous direction
23          currentBall.verticalDirection = previousBall.verticalDirection;
24          currentBall.horizontalDirection = previousBall.horizontalDirection;
25      }
26  }
src/CheckPeak.cs
Throw Event Detection
The FrameList.CheckHand method is responsible for detecting both ThrowEvents and CatchEvents, as in theory they occur under very similar (but opposite) conditions. The CheckHand method is passed the JugglingBall from the most recent frame (currentBall), the ball from the previous frame (previousBall), and also the Hand that is to be checked. Similar to the CheckPeak method, it is also given the time the JugglingBall was seen, so that the Event is fired with the correct time.
Depending on the hand parameter the method is passed, it will use the correct hand distances for its calculations (lines 3-15).
A ThrowEvent occurs when a ball is travelling away from the hand in question (line 18). In order to detect this event, each hand has an area around it (called the Throw Zone, shown in Figure 4.6). The method checks that the previousBall was inside the Throw Zone (line 20) and that the currentBall is outside the Throw Zone (line 22). Lines 25-43 deal with firing the ThrowEvent and ensure that the same ThrowEvent is not thrown twice in a row.
This method of detection has its flaws, as the ball has to be seen twice: once inside the Throw Zone and once outside. The ThrowEvent will not be fired until the ball has left the Throw Zone (meaning a slight delay in timing); however, it was developed this way to improve the program's ability to handle dropped Frames.
A ThrowEvent contains the position of the ball that was thrown (just as it left the Throw Zone), the hand involved in the throw and the time the ThrowEvent was fired.
 1  public void CheckHand(JugglingBall currentBall, JugglingBall previousBall, Hand hand, long gameTime)
 2  {
 3      double currentDistance;
 4      double previousDistance;
 5      // get the correct distances for the given Hand
 6      if (hand == Hand.Left)
 7      {
 8          currentDistance = currentBall.LHDistance;
 9          previousDistance = previousBall.LHDistance;
10      }
11      else
12      {
13          currentDistance = currentBall.RHDistance;
14          previousDistance = previousBall.RHDistance;
15      }
16      .......
17      // check for throws
18      else if (currentDistance > previousDistance)
19      {
20          if (previousDistance < (Helper.HAND_SIZE + Helper.THROW_ZONE_SIZE))
21          {
22              if (currentDistance > (Helper.HAND_SIZE + Helper.THROW_ZONE_SIZE))
23              {
24                  // THROW
25                  if (HandEvent != null)
26                  {
27                      // if the last event thrown is not a hand event
28                      if ((lastEventThrown is ThrowEvent) == false)
29                      {
30                          // pass the event hand so it can distinguish between right and left
31                          HandEvent(this, new HandEventArgs(currentBall.position, hand,
32                              EventType.Throw, gameTime));
33                          lastEventThrown = new ThrowEvent(currentBall.position, hand, gameTime);
34                      }
35                      // or if the last event was a hand event from a different hand
36                      else if (((ThrowEvent)lastEventThrown).hand != hand)
37                      {
38                          // pass the event hand so it can distinguish between right and left
39                          HandEvent(this, new HandEventArgs(currentBall.position, hand,
40                              EventType.Throw, gameTime));
41                          lastEventThrown = new ThrowEvent(currentBall.position, hand, gameTime);
42                      }
43                  }
44              }
45          }
46      }
47  }
src/CheckThrow.cs
Catch Event Detection
A CatchEvent is detected in a similar way to a ThrowEvent; however, for a catch, the ball must be detected outside the Throw Zone of a hand and then inside the Throw Zone in successive Frames.
The code listing below shows the if statement that fits into the elided code of the listing above (taken from the FrameList.CheckHand method). The logic behind detecting a CatchEvent is similar to that of a ThrowEvent, except that the ball must be travelling towards the hand in question (line 2), the previousBall must be seen outside the Throw Zone (line 4) and the currentBall inside the Throw Zone (line 6). Lines 19-37 again ensure that the same CatchEvent is not thrown twice in a row.
The slight difference in the CatchEvent detection is that, if a ball is caught, its direction is set to be in the hand that has caught it (lines 8-18).
A CatchEvent contains the position of the ball that was caught (just as it entered the Throw Zone), the hand involved in the catch and the time the CatchEvent was fired.
 1  // check for catches
 2  if (currentDistance < previousDistance)
 3  {
 4      if (previousDistance > (Helper.HAND_SIZE + Helper.THROW_ZONE_SIZE))
 5      {
 6          if (currentDistance < (Helper.HAND_SIZE + Helper.THROW_ZONE_SIZE))
 7          {
 8              // CATCH
 9              if (hand == Hand.Left)
10              {
11                  currentBall.verticalDirection = BALL_DIRECTION.LeftHand;
12                  currentBall.horizontalDirection = BALL_DIRECTION.LeftHand;
13              }
14              else
15              {
16                  currentBall.verticalDirection = BALL_DIRECTION.RightHand;
17                  currentBall.horizontalDirection = BALL_DIRECTION.RightHand;
18              }
19              if (HandEvent != null)
20              {
21                  // if the last event thrown is not a hand event
22                  if ((lastEventThrown is CatchEvent) == false)
23                  {
24                      // pass the event hand so it can distinguish between right and left
25                      HandEvent(this, new HandEventArgs(currentBall.position, hand,
26                          EventType.Catch, gameTime));
27                      lastEventThrown = new CatchEvent(currentBall.position, hand, gameTime);
28                  }
29                  // or if the last event was a hand event from a different hand
30                  else if (((CatchEvent)lastEventThrown).hand != hand)
31                  {
32                      // pass the event hand so it can distinguish between right and left
33                      HandEvent(this, new HandEventArgs(currentBall.position, hand,
34                          EventType.Catch, gameTime));
35                      lastEventThrown = new CatchEvent(currentBall.position, hand, gameTime);
36                  }
37              }
38          }
39      }
40  }
src/CheckCatch.cs
5.4.3 Missed Events
Processing the FrameList in this manner will lead to a stream of these Events. Given the amount of work going on in the background to get to the point of detecting Events (i.e. getting all the necessary information out of the raw data from the Kinect Sensor), it is possible for Frames to be missed. When a Frame is missed, the information required is not retrieved from the Kinect Sensor (as the system is still processing the previous Frame). It is entirely possible that missing certain Frames could lead to missed Events.
In order to minimize these missed Events, the Throw and Catch Event detection only needs to see the ball anywhere inside the Throw Zone and then anywhere outside it. This means that if some Frames are missed in between, the program will still detect the Events, as long as it has one Frame where the ball is inside the Throw Zone and one where it is outside.
Particularly on slower machines, the system can still miss Events. For example, if a ball is seen outside the Throw Zone of a hand in one Frame, and by the time the next Frame is seen the ball has already been caught, then the program will have missed the CatchEvent it was supposed to detect. There is very little that can be done about missing Events in this manner; however, once all of the Events for a juggle have been created, the missed Events can be inferred (discussed later).
The next section will discuss the logic behind dealing with missed Events. The logic used was developed using Test Driven Development but is still not perfect (testing uncovered an issue with inferring missed Events that could not be resolved). These issues and the tests used are discussed in the Evaluation chapter.
5.5 Processing Events and Pattern Matching
As the system continually updates, each Frame is created and Events are fired when the system detects a Throw, Catch or Peak occurring. When a user has finished juggling, the system must use these Events (and the times at which they were created) to calculate how well the user has juggled (for example, using Shannon's Uniform Juggling equation).
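For reference, Shannon's theorem for uniform juggling (see [24]) relates these timing properties as

(F + D) H = (V + D) N

where F is the flight time of a ball, D is the dwell time (the time a ball spends in a hand), V is the vacant time (the time a hand spends empty), N is the number of balls and H is the number of hands.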
As discussed previously, it is possible that the system has missed key Events during its execution. In order to deal with this, the EventAnalyser class will infer which Events have been missed.
It is able to do this given that it knows the number of balls and the number of hands that were used when the user was juggling. As discussed previously, when a user is juggling there is always a pattern to the Events fired which is unique to the number of balls and method of juggling (i.e. cascade or shower). The logic behind this task is carried out in the concrete EventAnalyser classes. Figure 5.8 shows the methods that each concrete EventAnalyser has.
Figure 5.8: EventAnalyser Class Diagram
Each EventAnalyser stores the pattern that has to be found in an EventType[]. As the pattern that needs to be found depends on the first hand that throws, both Event patterns are stored (in LEFT_HAND_PATTERN and RIGHT_HAND_PATTERN respectively). With each pattern there are also different numbers of Events between each of the Uniform Juggle Equation's properties (dwell, flight and vacant time); these are stored and are set when a concrete class is created.
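To illustrate the idea, such a pattern is simply an ordered array of the EventTypes expected to follow the opening throw. The system's actual arrays are not listed in this report, so the contents below are hypothetical; for a two-ball exchange started by the left hand, the stored pattern might be:

// hypothetical contents for illustration only; the real arrays depend on
// the number of balls and the method of juggling
private static readonly EventType[] LEFT_HAND_PATTERN = new EventType[]
{
    EventType.Peak,   // the first ball reaches its vertex
    EventType.Throw,  // the right hand answers with its own throw
    EventType.Catch,  // the first ball lands in the right hand
    EventType.Peak,   // the second ball reaches its vertex
    EventType.Catch   // the second ball lands back in the left hand
};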
The two main methods of the EventAnalyser class are CheckEvents and ValidateStart (both are abstract methods); each of these methods is discussed in the remainder of this section. Given the inherent complexity of, and differences between, each of the EventAnalysers, the remaining code listings in this chapter use pseudo code instead of the actual system's code; this is to give a general idea of how each of the EventAnalysers completes its task.
5.5.1 Validating The Start of a Pattern
In order to separate the concerns of Event pattern matching, the start of an Event pattern is validated within a separate method called ValidateStart. The pseudo code for how this method proceeds is given below.
 1  protected override List<Event> ValidateStart(List<Event> eventList, JuggleReport report)
 2  {
 3      // if first few events of eventList match pattern
 4      //     determine first throw
 5      //     setup patternToBeFound
 6      //     return
 7      //
 8      // find the first catch event found
 9      //
10      // use first catch to determine which hand threw first
11      // setup pattern to be found
12      //
13      // create list to store missed events
14      //
15      // based on index of first catch in eventList, insert the number of missed events
16      //
17      // use existing events found instead of creating new ones
18      //
19      // infer times of missed events
20      // combine missed events and event list
21      // return the combination
22  }
src/ValidateStartPseudo.cs
Lines 3-6 deal with the situation where the Events at the start of eventList match the desired pattern. In this case, the first throw is found, and patternToBeFound is set to either LEFT_HAND_PATTERN or RIGHT_HAND_PATTERN accordingly.
If the start of eventList does not match the desired pattern, then lines 8-13 are used to find the first CatchEvent that was seen; using this Event, they determine which hand threw the first ball and set up the patternToBeFound accordingly.
Lines 13-17 use this first CatchEvent in order to insert the Events that the system has missed. The index in eventList of the first catch tells the method how many Events have been missed. For example (in the 3 Ball Pattern), if the first catch is at an index of 0, then the system has missed two ThrowEvents and a PeakEvent.
The important part of this method is that it does not infer Events that it has seen (lines 15-17). For example (again using the 3 Ball Pattern), if the first CatchEvent was at an index of 1, then the system could have missed either a ThrowEvent and a PeakEvent, or two ThrowEvents (there are more cases than this, as the hands involved in the ThrowEvents are important). In this case, the Event at index 0 must be kept by the system and the two Events that are missing must be inferred. This leads to ValidateStart having a reasonably high number of if statements (thus increasing its cyclomatic complexity); however, this is somewhat unavoidable if the system is not to discard the Events it has seen in favour of building a completely inferred start.
The Events that are inferred are maintained in a separate list (Events that have been seen are also added to this list should they need to be). When the list of missed Events has been built, the first CatchEvent is added to the end of the list. When Events are inferred, their time is initially set to 0. Once the list of missed Events has the first CatchEvent (with its known time) at the end, the list is passed to another method to infer all the times of the missed Events. This method simply infers the times of any missed Events so that if two known Events are separated by many missed Events, the missed Events all occur the same distance apart in time. The method that carries out the time inference is EventAnalyser.InferMissedTimes, which is static and available to all concrete classes of type EventAnalyser.
Once all the missed Events have had their times inferred, they are added to the start of the eventList and returned (lines 19-21).
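A minimal sketch of the even-spacing behaviour described above is given below. The real EventAnalyser.InferMissedTimes is not reproduced in the report; the body, and the assumptions that Event exposes a writable time field, that 0 marks an inferred Event and that the first Event's time is known, are illustrative only.

protected static void InferMissedTimes(List<Event> events)
{
    int lastKnown = 0; // index of the last Event whose time is known
    for (int i = 1; i < events.Count; i++)
    {
        if (events[i].time == 0)
        {
            continue; // an inferred Event whose time is still unknown
        }
        int gap = i - lastKnown;
        if (gap > 1)
        {
            // spread the inferred Events evenly between the two known times
            long step = (events[i].time - events[lastKnown].time) / gap;
            for (int j = lastKnown + 1; j < i; j++)
            {
                events[j].time = events[lastKnown].time + step * (j - lastKnown);
            }
        }
        lastKnown = i;
    }
}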
5.5.2 Matching the Pattern
The method behind the majority of the Event validating is CheckEvents. This method calls ValidateStart before it does any work, so that it can ensure that any Events missing from the start of the pattern are included before it proceeds. Calling ValidateStart also ensures that the patternToBeFound has been set up correctly; this EventType[] is used throughout CheckEvents so that it knows when to use the Events it has seen in eventList and when it should infer a missed Event. The pseudo code for how the method proceeds is given below. It is a simplified version of the actual method used, for the reasons discussed in the Evaluation chapter.
 1  public override void CheckEvents(List<Event> eventList, JuggleReport report)
 2  {
 3      // validate the start of eventList
 4      //
 5      // create empty list of Events to return (modifiedEvents)
 6      //
 7      // add the first Event in eventList to this list (as it is not part of the pattern)
 8      //
 9      // patternIndex = 0
10      // eventListIndex = 1
11      // for (; eventListIndex < eventList.Count; eventListIndex++)
12      // {
13      //     if (e == patternToBeFound[patternIndex])
14      //         add e to modifiedEvents
15      //         patternIndex++
16      //         eventListIndex++
17      //     else
18      //         we need to infer event(s)
19      //         infer event at patternToBeFound[patternIndex]
20      //         add inferred Event to modifiedEvents
21      //         patternIndex++
22      //         decrement eventListIndex so that it does not change for the next iteration
23      // }
24      // infer times of modifiedEvents
25  }
src/CheckEventsPseudo.cs
Line 5 creates an empty list to store the final list of Events that will be returned; by the end it will contain both Events that have been seen and inferred Events. Indices into both the patternToBeFound array and eventList are kept, to track the Event being processed and the current element of the pattern to match (lines 9-11). As the patternToBeFound does not contain the first throw (the patterns always start with the second Event), the first Event is added to the list before the pattern matching begins (line 7).
The logic in lines 13-22 is simplified (due to the issues that will be discussed in the Evaluation chapter), but the idea behind what happens is given in the pseudo code. Each Event in eventList is checked to see if it matches the current pattern element (line 13); if it does, it is added to modifiedEvents and the indices incremented (lines 14-16). If the Event does not match the pattern (line 17), then the current pattern element is used to infer the missed Event, which is added to the list. The index into eventList is decremented so that it will remain the same at the start of the next loop iteration (lines 18-22). Finally, given that modifiedEvents may contain inferred Events (which will have a time of 0), the times of the missed Events in the entire list are inferred (using the existing Event times).
Chapter 6
Evaluation
This chapter will discuss how the system was evaluated. It will discuss which of the requirements have been met and consider future developments for the system. The User Evaluation that was carried out on the system will also be discussed.
6.1 Requirements Evaluation
The Requirements that were discussed in the Requirements chapter are summarised in Table 6.1. In the original table, the Requirements that have been met are colored green and the ones that were not implemented are colored red; here the two unmet Requirements are marked "(not met)".

Table 6.1: Requirements and whether they were met

Must Have:
- Detect user throwing a ball
- Detect user catching a ball
- Detect a peak of a ball
- Track user's hand positions
- Provide report on a juggling session
- Ability to detect 1, 2 and 3 ball patterns

Should Have:
- Calculate Shannon ratio for a juggle
- Suggestions on improving juggling technique
- Detect user dropping a ball
- Ability to detect 4 ball patterns (not met)

Could Have:
- Tasks to improve timing
- Tasks to improve throwing

Would Like To Have:
- Ability to define patterns using Siteswap notation (not met)
As can be seen from the table, 11 of the 13 Requirements have been implemented. All of the high priority Requirements (Must Have) have been implemented, and only one of the Should Have Requirements has not been.
The system is designed such that, in order to recognise a 4 Ball Pattern, the main class that would be needed is another concrete subclass of EventAnalyser that looks for the 4 Ball Pattern. Given the nature of the 4 Ball Pattern (juggling two balls in each hand independently), the logic behind this class would have to be slightly different (in comparison to the patterns that are currently implemented), but it would certainly be realisable given the design of the system.
Implementing Siteswap notation would be a slightly trickier task. Given the design of the system, the pattern entered by the user would have to be converted into an EventType[], and a concrete EventAnalyser instantiated to use this as its patternToBeFound. Again this task would certainly be possible; however, the logic behind understanding the Siteswap pattern and converting it to an EventType[] would perhaps be complex.
6.2 Test Driven Development
Test Driven Development (TDD) is a software development process that involves writing the software tests (which will initially fail) before the actual code that carries out the task you wish the system to complete. Once the tests are written, the code used by the tests is written, and re-running the tests (which should now pass) ensures that the code performs the desired task.
Given that the concrete classes of EventAnalyser have to deal with missed Events, the choice was made to develop these classes using TDD. The tests used against these classes involved taking a perfect Event pattern for the EventAnalyser being tested, programmatically removing Events, running the test, and then validating that the Event pattern the EventAnalyser had built matched the perfect Event pattern.
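A simplified sketch of a test in this style is given below (the actual tests are reproduced in the Appendices). The NUnit attribute, the BuildPerfectThreeBallPattern helper and the assumption that CheckEvents repairs the list it is passed are all illustrative, not the system's test code.

[Test]
public void ThreeballAnalyserRebuildsAPatternWithAMissingEvent()
{
    // start from a perfect 3 Ball Event pattern (helper assumed)
    List<Event> perfect = BuildPerfectThreeBallPattern();
    List<Event> damaged = new List<Event>(perfect);
    damaged.RemoveAt(3); // programmatically remove an Event

    // run the analyser so that it infers the missing Event
    new ThreeballAnalyser().CheckEvents(damaged, new JuggleReport());

    // the rebuilt pattern should match the perfect one, Event type for Event type
    Assert.AreEqual(perfect.Count, damaged.Count);
    for (int i = 0; i < perfect.Count; i++)
    {
        Assert.AreEqual(perfect[i].GetType(), damaged[i].GetType());
    }
}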
Using this process uncovered a subtle issue with the logic behind inferring missed Events, which is discussed in the remainder of this section. Throughout this section the 3 Ball Event pattern is used (shown in Figure 6.1), but the issue is present in all Event patterns. The numbers above each Event are the times at which the Events happened.
Figure 6.1: Perfect 3 Ball Event Pattern with times
The tests used are given in the Appendices. The two methods that were the focus of the tests were the concrete implementations of EventAnalyser.ValidateStart and EventAnalyser.CheckEvents; these are the two methods responsible for inferring Events.
6.2.1 Inferring Event Times
Inferring the correct Event type (without regard to time) works correctly for each of the EventAnalysers; the issue arises with the timing of the missed Events. An example situation is given in Figure 6.2.
Figure 6.2: 3 Ball Event Pattern with 4 missed Events
It can be seen from the Figure that four Events have been missed (with times 5, 6, 7 and 8 respectively). The first issue that was discovered during the TDD of the EventAnalysers is that when processing this Event list, the PeakEvent with time 8 was being added to modifiedEvents (as it was a pattern match), when in fact there were missed Events before it. The missed Events were then not detected until the ThrowEvent at time 9 (as this did not match the desired pattern), so the inferred Events were inserted too late. This situation was rectified by ensuring that, before a pattern match is added to modifiedEvents, it is checked to be closer to the last Event seen than to the next Event.
A further example of this issue is given in Figure 6.3.
Figure 6.3: 3 Ball Event Pattern with 4 missed Events
Again similar to the previous example, 4 Events have been missed (this time with times 7, 8, 9 and 10). This example is not perfectly inferred by the EventAnalyser. The program will proceed until it reaches Event 6, notice that there is a missing LeftCatch Event and insert an inferred LeftCatch; it will then incorrectly use the Peak, and insert inferred Events for the RightThrow, RightCatch and then a Peak, when it should not have inferred this Peak and should have used the existing one instead. The pattern built by the ThreeballAnalyser is shown in Figure 6.4, with the inferred Events highlighted in red.
Figure 6.4: 3 Ball Event Pattern with inferred Events
The pattern shown in this Figure clearly has incorrect times associated with the inferred Events (although the inferred Event types are correct). Given the project's time constraints, this issue was left unresolved. A major part of the problem in resolving it is that, when checking the pattern at Event 6 (in Figure 6.3), the only two Events that can be used by the system are Events 5 and 11. It is not possible to use Event 12, as the system has no way of knowing for sure that there are no missed Events between Event 11 and Event 12. A method was implemented using the second Event in the future in its logic, but in practice this did not work, so the method was returned to its current state and the issue remains.
As a result of this issue, the tests that were run on the EventAnalysers use a boolean parameter controlling whether or not to validate the timing of Events as well as the pattern; under some circumstances the system will handle the inferring of times correctly (as in the first example given in this section), but in others it will not.
6.3 System Evaluation
The system was evaluated by having users perform tasks with it. Upon completion of the tasks, users were asked to complete a brief questionnaire on their experience, in order to gather their opinions on the system.
The tasks given to the users are included in Appendix A, and a blank questionnaire is given in Appendix C. The Juggling Tutorial used during the Evaluation was produced by Henry's Juggling Equipment [8] and is given in Appendix B.
6.3.1 User Evaluation
This section will discuss the answers users gave to each of the questions in the questionnaire.
Questions 1 and 2
Figure 6.5: Question 1 and 2 Responses
Questions 1 and 2 were included in order to gauge whether the users participating in the evaluation had any juggling experience. The answers given show that just over 50% of participants had never attempted to juggle before. These participants therefore had to read over the tutorial on how to juggle with two balls from Appendix B. Users were told that, for the evaluation, juggling with 3 balls would not be necessary; they were invited to attempt juggling with 3 balls after the evaluation tasks were completed.
It would have improved the evaluation if an expert juggler could have evaluated the system. Although no participant answered "Lots" for Question 2, the general standard of juggling was reasonably high (for the users who said they had juggled before). This may have been a result of running the evaluation using only two balls; two balls were chosen as juggling any more may have proved a little tricky for the more novice jugglers in the evaluation.
Question 3
Question 3 was used to evaluate how difficult users found the act of juggling in front of the system. This question asked how difficult they found juggling in front of the system, not how difficult they found the task of juggling itself.
Some users did express concerns about the starting position that they had to adopt being slightly awkward. However, overall this was not a major concern, and most users found the actual task of juggling in front of the system comfortable enough.
Question 4
Question 4 was asked to see how users felt about the performance issues when the application was running on the lab machines. The lab machines struggled to process the information from the Kinect Sensor (even using a frame rate as low as 15 frames per second). Whilst this performance issue did not affect the users' juggling, the reports gathered on the slower lab machines tended to have many more inferred Events (as the machines dropped more Frames).
Question 5
Question 5 asked the users if they found the report features useful. Many users seemed to enjoy viewing a report and being able to see exactly what the system had managed to record about their juggling session. Some users expressed concerns about the amount of information on the report screen (particularly the hand positions, as these are detected once per Frame), and most users were quick to turn them off in the report visualisation so they could explore the other information on display.
Questions 6, 7 and 8
Questions 6 and 7 related to the User Interface and how easily the users were able to interact with the system. Most users found the menu system easy to use and understand, and agreed that the system was user friendly. However, one user in particular was disappointed that the program did not allow them to use the mouse to select menu options.
Question 8 was an open ended question that gave participants an opportunity to suggest improvements to the User Interface. Some of the participants who evaluated the system on the slower machines suggested that there could be an options menu item for performance based criteria. This option would allow users to alter the frame rate (and potentially other properties) of the system, giving them control when running it on slower (or faster) machines.
A few participants also expressed concerns about changing the values in the options menu (and report screen). The participants did not like having to press the return key repeatedly in order to change an option to the value they desired. The general opinion amongst participants was that this was not an issue when the value ranges were small, but when the ranges were particularly large, repeatedly pressing the return key was annoying.
Another issue raised with the User Interface was that, to change the number of balls the system should recognise, the participants had to return all the way to the main menu and change the value in the options menu. A few users suggested it would be more user friendly if such a change could be made from the main game screen before the user starts juggling.
Questions 9 and 10
Questions 9 and 10 gathered information from the users with regard to Task 2 of the evaluation. As the mini game did not output any information, the participants' results from playing the mini game were collected in the questionnaire. Nearly all of the users had more than one attempt at the mini game. The two users who only received one star were two of the users using the lab machines for the evaluation; this was because the lab machines struggled to detect many PeakEvents, which are the Events that gain the user points in the mini game.
Figure 6.6: Question 9 and 10 Responses
Question 10 was used to discover whether the participants knew about the lesson the mini game was trying to teach before they took part in the evaluation. The majority (over 80%) did not know that it would be beneficial to make each ball peak at the same height each time. The two people who did know about this were the two people who achieved the highest star rating in the previous question.
Questions 11 and 12
Questions 11 and 12 were used to gather participants' opinions on whether or not they found the mini games useful and fun. Some participants appeared to be a bit sceptical about the mini game's usefulness, but the majority agreed that it was useful. The data gathered with regard to the improvement in the users' juggling is discussed in the next section. Two of the participants expressed concerns that the mini game was not particularly necessary (they could practise this skill without the program), although they did suggest that they could understand its inclusion in the system.
The vast majority of participants agreed that the mini game was fun. Quite a lot of the participants enquired about what the highest score achieved to date was. Including a scoreboard feature for the mini games would be an easily achievable future development; however, when the system is used repeatedly by a single user, the scoreboard feature would not be as useful as it would be when multiple users are using the same system.
Question 13
Question 13 asked the participants if they thought they would be able to improve their juggling with the system. Only one person thought they would not, stating that they would rather there were more advanced patterns available to practise (the participant felt that they were already able to juggle well with three balls). If there were more advanced patterns in the system, the participant felt that they would be able to use it.
Question 14
Question 14 was an open ended question asking participants to suggest things they felt needed to be done to improve the overall system. Some of the answers to this question involved minor issues, but some more interesting things were highlighted as well.
Some participants suggested that the voice controlled elements of the main menu should be improved. The voice recognition software did struggle a bit in noisy surroundings, and participants suggested that giving users another way to start the session other than voice would be beneficial. This might be tricky, as when the user is about to juggle they will not be able to push buttons on a keyboard or mouse; instead, some sort of method by which the first Event seen triggers the start of recording may be a better idea. The starting position that the user must adopt also raised concerns; on occasion the system struggled to recognise this posture, so this could be improved as well. Participants also suggested that another way to stop the recording, other than standing still for a few seconds, would be a worthwhile feature to add.
Some participants felt the system would be better with the addition of new patterns and mini games (participants seemed to enjoy the idea of the juggling mini games). Participants also felt that the report showed a bit too much information at the start; they felt that perhaps some of the information should be disabled to begin with, letting them turn it on if they so wished.
An interesting feature that two participants thought would be beneficial is real time advice. The advantage of giving users advice in real time is that they could try to adjust their technique as they are juggling, instead of retrospectively. Unfortunately, the system already does a large amount of processing to detect the Events; coupling this with the pattern matching and Event inference in real time (these tasks are currently done after the user has finished juggling) would perhaps be too much for the system. Whilst this feature would be a good addition, it might take a good amount of development to realise it.
6.3.2 Mini Game Evaluation
As well as the questionnaire gathering participants' opinions on the JugEm system, data was collected for the users' first attempt at juggling with the system and for their second attempt (after they had played the mini game). This data was gathered in order to see if playing the mini game (and indeed learning about maintaining peak heights) was a worthwhile task for users to undertake. The results of this data collection are given in this section.
The graphs in this section show the standard deviation of the positions of the participants' peaks (using an x and y Cartesian coordinate system). Each graph shows the standard deviation of the values before the mini game (in blue) and after the mini game (in red). There are two graphs: one for the deviation on the horizontal axis and one for the deviation on the vertical axis. The mini game's purpose is to teach the user to maintain their vertical peak positions, but the horizontal data was also collected. While reading this section it is important to note that a peak is considered to be the moment when a ball is at its highest point on its trajectory, i.e. the moment when its vertical direction changes from up to down.
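For clarity, the statistic plotted in the graphs is the standard deviation of a participant's recorded peak positions along one axis. A minimal sketch of the calculation is given below; the evaluation scripts themselves are not part of the report, so the method name is illustrative (assumes using System, System.Collections.Generic and System.Linq).

// computes the population standard deviation of one participant's
// peak positions along a single axis (e.g. the peaks' Y values)
private static double StandardDeviation(IList<double> peakPositions)
{
    double mean = peakPositions.Average();
    double sumOfSquares = peakPositions.Sum(p => (p - mean) * (p - mean));
    return Math.Sqrt(sumOfSquares / peakPositions.Count);
}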
6.3.3 Horizontal Deviation
Figure 6.7 shows the deviation of the X-Values before and after participants played the mini game during the evaluation. Each of the 11 participants is shown in the graph (each participant has a marked point on both of the lines on the graph).
The mini game attempts to teach participants to keep their vertical peak heights (not their horizontal peak positions) the same every time; however, 6 participants managed to improve their horizontal deviation as well.
Ideally, the red line should show a lower deviation than the blue line for every participant, as this would show that the mini game had a positive effect on the users' horizontal variance.
Figure 6.7: Deviation of X Axis values (before/after)
6.3.4 Vertical Deviation
Figure 6.8 shows the deviation of the Y-Values before and after participants played the mini game during the evaluation. Each of the 11 participants is shown in the graph (each participant has a marked point on both of the lines on the graph).
The Vertical Deviation shows much more positive results: 9 of the participants' Y-Values deviated much less after they had completed the mini game. Some participants showed very dramatic improvements after playing the mini game (particularly participants 2 and 5).
The average change in the standard deviation of the vertical positions is 9.38. This suggests that, overall, the mini game improved participants' juggling technique.
Figure 6.8: Deviation of Y Axis values (before/after)
6.3.5 Further Analysis
The data gathered by the evaluation could be analysed further by carrying out more statistical tests. Various statistical tests exist to calculate whether or not these values have statistical significance (such as the F-test or t-test). These tests were not carried out due to time constraints, but would be an excellent method of verifying (statistically) that playing the mini game improved the participants' juggling technique.
The analysis carried out above also treats the X and Y values as independent variables (which, technically speaking, they are not); it would be beneficial to use an analysis method based on the two dimensional covariance instead of treating these variables independently.
Chapter 7
Conclusion
7.1 Project Summary
7.2 Future Work
7.3 Personal Reflection
Appendices
Appendix A
Evaluation Tasks
JugEm Evaluation
Before attempting the Tasks given below, please complete Questions 1 and 2 of the Questionnaire.
Task 1
If you have never juggled before, please consult the brief tutorial on how to juggle two balls.
Double click on the icon to start the program. You should be presented with the screen below.
Navigate to the Options menu and set the number of balls to 2. Return to the Main Menu and select Play Game.
Once the game screen has loaded, you will be presented with an indicator to help get you in the right posture, ready to juggle. Red means that you are standing in an incorrect posture; Yellow means that the system is waiting on you to give the voice command of "Go" before it will start recording you juggle. Once you have given the voice command, the indicator will turn green and the system will be recording you juggling.
Before you give the voice command, have a go at throwing the balls to ensure the sensor can see your hands and the balls being thrown. They will appear black on the screen.
When you are in the correct position and ready to record your session, give the voice command "Go" and juggle the two balls for as long as you can. To stop recording your session, stand still for a few seconds or give the voice command "Stop".
When the system has finished processing the events it has gathered, you will be presented with the report screen, which displays all the information the system has gathered. If your score is not 100%, you will be given a popup with advice on how you could have improved your juggle.
Have a go at changing the display options for the report to have a look at all the information that was gathered during your session. When you are ready, select the back option to return to the main menu.
Task 2
This task will give you a chance to play one of the two mini games in the system. When juggling, it is greatly beneficial to throw each of the balls to the same height. Throwing each ball to the same height helps to keep the timing of your throws even, making your juggle much more fluent and easy on the eye.
When you are ready, select the Training menu and then Peak Practise from the menu screens.
The game screen will load again, and you will be presented with a similar screen to the one from Task 1. However, the objective this time is different. Using only one ball, you have 60 seconds to score as many points as you can. Points are awarded for keeping your throws to the same height as many times as you can. The more consecutive throws you get in a row, the more points you will get.
If you play the game with the graphics displayed, the game will show a target for you to hit; you get a different number of points for hitting each zone in the target. The graphics can be distracting though; it is better to look at the balls you are throwing and not the screen. The scoring works the same if you choose not to display these graphics.
When you are ready and in the correct posture, give the voice command to start the game. At the end of the 60 seconds, you will be given a breakdown of your score and awarded a star rating out of four. Have as many goes as you wish before proceeding to the next part of this task.
The next part of this Task involves repeating Task 1 with two balls again. Only this time, try to make sure your throw heights are more regular (similar to what you have just done in the mini game). The peak positions from your report in Task 1 will be compared to the peak positions in this Task.
Return to the Main Menu and select Play Game again. Have a go with two balls, trying to maintain your throw height, and see if your score has improved!
When you are finished, please fill out the remaining questions in the questionnaire. Thanks!
Appendix B
Juggling Tutorial
An Introduction In Juggling With Balls
Keep your forearms in a horizontal posture, the upper arm and forearm approximately forming a right angle. Bring your upper arms slightly forward and keep your elbows quite close to your body.
Exercising With One Ball
Take one ball in your right hand, the palm facing upwards. Now throw the ball to the left hand in an arc-shaped curve, and catch it in your left. The vertex of the ball's trajectory should not be more than 4 inches (10cm) above, and slightly to the left of, your forehead. The ball receives its momentum from the forearm. The hand slightly supports the throwing motion by moving 2-3 inches towards the upper left. The ball is thus thrown off nearly at the centre of your body.
The catching hand should be drawn 2-3 inches towards the arriving ball. Subsequently, throw the ball back in the same manner from left to right.
Exercising With Two Balls
This time, take one ball in your right and left hand respectively. Now throw the right-hand ball in an arc-shaped curve towards your left hand. When the ball has reached the vertex of its trajectory, throw the other ball from left to right.
Moreover, pay attention to throwing the second ball at the exact instant (i.e. when the first one is at its vertex). Otherwise your rhythm and timing will turn out to be incorrect and you will most probably have difficulty in future when juggling with 3 balls.
Juggling With Three Balls
Juggling with three balls at first requires that you take two balls in one hand: one of the balls rests on the index and the middle finger, the other one in the palm.
Note: For the sake of simplicity, take 2 balls in your right hand if you are right-handed, or take 2 balls in your left hand if you are left-handed.
Start throwing with the hand that holds 2 balls. The ball resting on the fingers is thrown first. In order to avoid throwing two balls at a time, hold on to the ball on the palm with your ring finger and little finger.
In the first step, just catch the first ball you threw with your other hand and repeat this several times in order to get the feeling for the throw and the catch of the third ball.
Throw the white ball to your left hand. When it has reached its vertex, throw the black ball. When the black ball in turn has reached its vertex, throw the yellow ball. Meanwhile, your left hand has caught the white ball.
This is the basic pattern for juggling with 3 balls.
Looking at the trajectories of the balls and the movements of your hands, you will see a horizontal figure-of-eight. Should you have any difficulty in finding the correct rhythm, just throw the balls without catching them at first. The white ball and the yellow ball should then be on the floor next to your left foot. The time interval between the landing of the individual balls should be constant. You will easily hear it by the plopping sound of the balls as they land.
Now have fun practising!!!!!
Appendix C
Evaluation Questionnaire
JugEm Evaluation Questionnaire
For each statement, please show the extent of your agreement or disagreement by ticking the appropriate circle.
Juggling Experience
1. Have you ever tried to juggle before?
# Yes    # No
2. How much juggling experience do you have?
# None    # A little    # Some    # Quite a bit    # Lots
JugEm System
3. I was comfortable juggling in front of the system (i.e. it wasn't awkward).
# Strongly Agree    # Agree    # Neutral    # Disagree    # Strongly disagree
4. The system seemed to perform poorly as I was juggling in front of it.
# Strongly Agree    # Agree    # Neutral    # Disagree    # Strongly disagree
5. The report and its features were useful.
# Strongly Agree    # Agree    # Neutral    # Disagree    # Strongly disagree
User Interface
6. The system is user friendly.
# Strongly Agree    # Agree    # Neutral    # Disagree    # Strongly disagree
7. The menu system was difficult to use.
# Strongly Agree    # Agree    # Neutral    # Disagree    # Strongly disagree
8. What improvements do you think could be made to the User Interface?
Mini Games
9. What was your highest star rating during the mini game?
# 1    # 2    # 3    # 4
10. Were you previously aware of what the mini game was trying to let you practise?
# Yes    # No
11. The mini game was useful.
# Strongly Agree    # Agree    # Neutral    # Disagree    # Strongly disagree
12. The mini game was fun.
# Strongly Agree    # Agree    # Neutral    # Disagree    # Strongly disagree
Final Questions
13. I could use this system to improve my juggling.
# Strongly Agree    # Agree    # Neutral    # Disagree    # Strongly disagree
14. What things do you think could be done to improve the overall application (bugs, additional features etc.)?
Appendix D
Questionnaire Responses
This section shows the responses that participants gave to the Evaluation Questionnaire.
Figure D.1: Question 1 and 2 Responses
Figure D.2: Question 3 Responses
Figure D.3: Question 4 Responses
Figure D.4: Question 5 Responses
Figure D.5: Question 6 Responses
Figure D.6: Question 7 Responses
Figure D.7: Question 9 and 10 Responses
Figure D.8: Question 11 Responses
Figure D.9: Question 12 Responses
Figure D.10: Question 13 Responses
Appendix E
Mini Game Evaluation Data
This appendix graphically shows the data sets gathered during the Evaluation tasks.
Participant 1
Figure E.1: Participant 1 Results
Participant 1's trend showed no notable change; however, after completing Task 2 their horizontal peak variance did become smaller.
Participant 2
Figure E.2: Participant 2 Results
Participant 2 showed a good improvement in the vertical heights of their ball throwing. The data set gathered for Task 2 does not contain many points; however, even from the points gathered it can be seen that there has been an improvement in the vertical height of the throws.
Participant 3
Figure E.3: Participant 3 Results
Participant 3 showed a far more consistent peak height and also managed to reduce the horizontal variance
in the peaks.
Participant 4
Figure E.4: Participant 4 Results
Participant 4's data set before Task 1 was one of the better ones (even before playing the mini game). They still show a more consistent height variance after completing the mini game in Task 2.
Participant 5
Figure E.5: Participant 5 Results
Participant 5's first data set did not gather many events, and as a result no real comments can be made about their perceived improvement after the mini game.
Participant 6
Figure E.6: Participant 6 Results
Participant 6 showed a small improvement in peak height consistency; however, their horizontal variance did increase.
Participant 7
Figure E.7: Participant 7 Results
Participant 7 was one of the more accomplished jugglers who used the system; the mini game seemed to do very little to their already impressive peak position consistency.
Participant 8
Figure E.8: Participant 8 Results
Participant 8’s first data set was quite poor. Despite their good performance after the mini game, no real
comments can be made as to the effect of the mini game.
Participant 9
Participant 9 was another consistent juggler who evaluated the system. The mini game did little to improve their
juggling technique.
Figure E.9: Participant 9 Results
Participant 10
Participant 10 seemed to show very little improvement whatsoever. The gradient of their linear trend appears to have decreased slightly in the diagrams, but overall there is little improvement that would suggest playing the mini game improved their technique.
Figure E.10: Participant 10 Results
Participant 11
Figure E.11: Participant 11 Results
Participant 11's peak positions do appear more consistent after playing the mini game. However, their data set from the first Task contains fewer Events than their data set for Task 2.
Bibliography
[1] Bell Labs Advances Intelligent Networks. http://www3.alcatel-lucent.com/wps/portal/!ut/p/kcxml/04_Sj9SPykssy0xPLMnMz0vM0Y_QjzKLd4w39w3RL8h2VAQAGOJBYA!!?LMS_Feature_Detail_000025.
[2] Eric W. Aboaf. Task-level robot learning: juggling a tennis ball more accurately. Robotics and Automation, 3:1290–1295, 1989.
[3] Rob Abram. Wildcat Juggling Tutorials. http://wildcatjugglers.com/tutorial/category/index.html.
[4] Peter J. Beek and Arthur Lewbel. Science of Juggling. https://www2.bc.edu/~lewbel/jugweb/sciamjug.pdf.
[5] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 8(6):679–698, June 1986.
[6] David Sachs (Google Tech Talks). Sensor Fusion on Android Devices: A Revolution in Motion Processing (approx 23:30). http://www.youtube.com/watch?v=C7JQ7Rpwn2k.
[7] Bill Donahue. Jugglers Now Juggle Numbers to Compute New Tricks for Ancient Art. http://www.nytimes.com/1996/04/16/science/jugglers-now-juggle-numbers-to-compute-new-tricks-for-ancient-art.html.
[8] Henrys Juggling Equipment. An Introduction In Juggling With Balls. http://www.henrys-online.de/Downloads/IntroJugg.pdf.
[9] Billy Gillen. Remember the Force Hassan! http://www.jstor.org/stable/10.2307/1412713.
[10] Leah Goodman. Using quantum numbers to describe juggling patterns. https://play.google.com/store/apps/details?id=com.dreamstep.wLearntoJuggleClubs&hl=en.
[11] Google. Android Sensors Overview. http://developer.android.com/guide/topics/sensors/sensors_overview.html.
[12] D.F. Hayes, T. Shubin, G.L. Alexanderson, and P. Ross. Mathematical Adventures For Students and Amateurs. Spectrum Series. Math. Assoc. of America, 2004.
[13] Intel. Open CV Library. http://opencv.willowgarage.com/wiki/.
[14] Paul Klimek. Using quantum numbers to describe juggling patterns. http://www.quantumjuggling.com/.
[15] Marina Vela Nuez, Carlo Avizzano, Denis Mottet, and Massimo Bergamasco. A cost-effective sensor system to train light weight juggling using an interactive virtual reality interface. BIO Web of Conferences, The International Conference SKILLS 2011, 1:6, 2011.
[16] S. McKechnie. Popular Entertainments Through the Ages. Frederick A. Stokes Company, 1931.
[17] Microsoft. Kinect for Windows Software Development Kit. http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx.
[18] Microsoft Developer Network. Kinect for Windows Sensor Components and Specifications. http://msdn.microsoft.com/en-us/library/jj131033.aspx.
[19] Microsoft Developer Network. Kinect SDK v1.6. http://msdn.microsoft.com/en-us/library/jj663803.aspx#SDK_1pt6_M2.
[20] Microsoft Developer Network. MapDepthPointToColorPoint Method. http://msdn.microsoft.com/en-us/library/jj883692.aspx.
[21] Microsoft Developer Network. .NET Stopwatch Class. http://msdn.microsoft.com/en-gb/library/system.diagnostics.stopwatch.aspx.
[22] Microsoft Developers Network. Microsoft Speech Platform. http://msdn.microsoft.com/en-us/library/hh361572(v=office.14).aspx.
[23] Per Nielsen. Juggling Festivals From Scandinavia. http://www.juggle.org/history/archives/jugmags/38-3/38-3,p28.htm.
[24] C.E. Shannon, N.J.A. Sloane, A.D. Wyner, and IEEE Information Theory Society. Claude Elwood Shannon: collected papers. IEEE Press, 1993.
[25] Platty Soft. Juggle Droid Lite. https://play.google.com/store/apps/details?id=net.shalafi.android.juggle&hl=en.
[26] Edgar James Swift. Studies in the Psychology and Physiology of Learning. The American Journal of Psychology, 14:201–224, 1903.
[27] Trevor Taylor. Kinect for Robotics. http://blogs.msdn.com/b/msroboticsstudio/archive/2011/11/29/kinect-for-robotics.aspx.
[28] TWJC. Tunbridge Wells Juggling Club Tutorials. http://www.twjc.co.uk/tutorials.html.
[29] Walaber. WiiSticks Project. http://walaber.com/wordpress/?p=72.