
ACTA UNIVERSITATIS UPSALIENSIS
Uppsala Dissertations from the Faculty of Science and Technology
117
Exploring the Universe
Using Neutrinos
A Search for Point Sources in the Southern Hemisphere
Using the IceCube Neutrino Observatory
Rickard Ström
Dissertation presented at Uppsala University to be publicly examined in
Ångströmlaboratoriet, Polhemsalen, Lägerhyddsvägen 1, Uppsala, Friday, 18 December
2015 at 13:15 for the degree of Doctor of Philosophy. The examination will be conducted
in English. Faculty examiner: Professor Lawrence R. Sulak (Boston University, Boston,
USA).
Abstract
Ström, R. 2015. Exploring the Universe Using Neutrinos. A Search for Point Sources in the
Southern Hemisphere Using the IceCube Neutrino Observatory. Uppsala Dissertations from
the Faculty of Science and Technology 117. 254 pp. Uppsala: Acta Universitatis Upsaliensis.
ISBN 978-91-554-9405-6.
Neutrinos are the ideal cosmic messengers, and can be used to explore the most powerful accelerators in the Universe, in particular the mechanisms for producing and accelerating cosmic rays to incredible energies. By studying clustering of neutrino candidate events in the IceCube detector we can discover sites of hadronic acceleration. We present results of searches for point-like sources of astrophysical neutrinos located in the Southern hemisphere, at energies between 100 GeV and a few TeV. The data were collected during the first year of the completed 86-string detector, corresponding to a detector livetime of 329 days. The event selection focuses on identifying events starting inside the instrumented volume, utilizing several advanced veto techniques, successfully reducing the large background of atmospheric muons. An unbinned maximum likelihood method is used to search for clustering of neutrino-like events. We perform a search in the full Southern hemisphere and a dedicated search using a catalog of 96 predefined known gamma-ray emitting sources seen by ground-based telescopes. No evidence of neutrino emission from point-like sources is found. The hottest spot is located at R.A. 305.2° and Dec. −8.5°, with a post-trial p-value of 88.1%. The most significant source in the a priori list is QSO 2022-077, with a post-trial p-value of 14.8%. In the absence of evidence for a signal, we calculate upper limits on the flux of muon-neutrinos for a range of spectra. For an unbroken E⁻² neutrino spectrum, the observed limits are between 2.8 and 9.4 × 10⁻¹⁰ TeV cm⁻² s⁻¹, while for an E⁻² neutrino spectrum with an exponential cut-off at 10 TeV, the observed limits are between 0.6 and 3.6 × 10⁻⁹ TeV cm⁻² s⁻¹.
Keywords: astroparticle physics, neutrino sources, neutrino telescopes, IceCube
Rickard Ström, Department of Physics and Astronomy, High Energy Physics, Box 516,
Uppsala University, SE-751 20 Uppsala, Sweden.
© Rickard Ström 2015
ISSN 1104-2516
ISBN 978-91-554-9405-6
urn:nbn:se:uu:diva-265522 (http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-265522)
“The probability of success is difficult to estimate but
if we never search the chance of success is zero.”
Cocconi and Morrison [1]
Table of Contents

Acknowledgements
About this Thesis
Abbreviations and Acronyms
1 Introduction
  1.1 Physics Motivation
  1.2 The Multi-Messenger Approach
2 The Standard Model
  2.1 Matter Particles and Force Carriers
  2.2 Interaction Strengths
  2.3 Symmetries
  2.4 The Weak Interaction
  2.5 The Higgs Mechanism
  2.6 Fermion masses
  2.7 The Parameters of the Standard Model
  2.8 Beyond the Standard Model
3 The Cosmic Ray Puzzle
  3.1 Energy Spectrum
  3.2 The Origin of High-Energy Cosmic Rays
  3.3 Astrophysical Neutrinos
  3.4 Atmospheric Backgrounds
  3.5 Acceleration Mechanisms
  3.6 Potential Acceleration Sites
4 Neutrino Detection Principles
  4.1 Neutrino Cross-Section
  4.2 Cherenkov Radiation
  4.3 Energy Loss Mechanisms
  4.4 Event Topologies
5 The IceCube Neutrino Observatory
  5.1 The IceCube In-Ice Detector
  5.2 Optical Properties of the Ice at the South Pole
  5.3 The Digital Optical Module
  5.4 Data Acquisition System and Triggering
  5.5 Processing and Filtering
6 Event Simulation
  6.1 The IceCube Simulation Chain
  6.2 Neutrino Event Weighting
7 Reconstruction Techniques
  7.1 Noise Cleaning
  7.2 Particle Direction
  7.3 Angular Uncertainty
  7.4 Interaction Vertex
  7.5 Energy
8 Opening Up a Neutrino Window to the Southern Hemisphere
  8.1 The FSS Filter
9 Point Source Analysis Methods
  9.1 Hypothesis Testing
  9.2 Likelihood and Test Statistic
10 A Search for Low-Energy Starting Events from the Southern Hemisphere
  10.1 Analysis Strategy
  10.2 Experimental Data
  10.3 Simulated Data
  10.4 Event Selection
  10.5 Event Selection Summary
  10.6 Likelihood Analysis
  10.7 Results
  10.8 Systematic Uncertainties
11 Summary and Outlook
Summary in Swedish
Appendix A: BDT Input Variables
Appendix B: Result Tables
References
Acknowledgements
During my time as a graduate student at Uppsala University I have had the privilege to work with many dedicated, helpful and experienced scientists. We've organized meetings and workshops, had lengthy discussions on peculiar features of fundamental physics, and had a lot of fun during lunches and fika breaks, discussing everything from foreign and domestic politics to candy wrapping and mushroom picking. I've enjoyed the company of each one of you; thank you for this great time. Some of you deserve a special mention.
First of all I would like to thank my supervisors Allan Hallgren and Olga Botner. Few graduate students have had the privilege to work closely with two such excellent and curious physicists, always available to answer questions no matter the scope. Allan was the main supervisor of my thesis and introduced me to the physics potential of the Southern hemisphere at energies below 100 TeV. Together we developed a new data stream for collecting such low-energy events. After the discovery in 2013 of the first ever high-energy astrophysical neutrinos, the interest in this channel has grown dramatically. Allan has a never-ending stream of ideas and taught me how to be daring and bold in my creativity. He also emphasized the need to stay critical of accepted knowledge. How he can successfully calculate and compare areas of histograms with differences as small as 1% solely by judging the shape of lines on a plot located 3 meters away, I will never know. That remains a true mystery to me.
Olga helped me organize my work and to prioritize. Olga has the rare gift of making people think in new ways by simply listening and asking the right questions. We've had countless discussions about everything from V-A couplings and the Higgs mechanism to the nature of astrophysical objects. Thank you for taking time to discuss these things with me over the years, despite being buried in the work that comes with being the spokesperson of IceCube and a member of the Nobel Committee.
I’d like to extend my thanks also to my co-supervisor Chad Finley. Thank
you for taking an interest in my analysis and for all the discussions we’ve had
about OneWeight, likelihood algorithms, and hypothesis testing. I would also
like to thank Carlos Perez de los Heros. We’ve discussed many topics, ranging
from dark matter to statistical methods and art. Whenever you knocked on my
door, I knew I was going to learn something new and unexpected.
Henric Taavola, we've had as much fun as I think two people can have. I want to thank you for five fantastic years, and especially for putting up with me singing, dancing, talking politics and doing white-board science in an often weird mix and at a high pace. I will forever remember the days we pretended it was Christmas although it was June, the days we walked to the gas station to buy falafel because we both forgot to bring lunch, the days we took a walk in the beautiful forest nearby, and the 'glee' days when we limited all communication to musical numbers only. All of those days have a special place in my heart and so do you!
I would like to send a big thanks to the people in the IceCube groups in Uppsala and Stockholm, past and present: Olga, Allan, Carlos, Sebastian, Henric, Lisa, Alexander, Christoph, Leif, Olle, David B., Jonathan M., Chad, Christian, Klas, Jon, Samuel, Martin, Marcel, Matthias, Maryon, and the late Per-Olof. Thank you all for being encouraging and helpful.
Per-Olof was an inspiration to all scientists and will continue to be so. I remember him as a very determined and intelligent man with great passion and an open heart. He recommended me for the service work I did at the South Pole, and Olga and Allan kindly approved. That was an amazing trip and an absolute highlight of my time as a graduate student. Thank you John, Steve, Larissa, Ben R., Felipe, Blaise, Jonathan D., and Hagar for making this trip unforgettable.
I also extend my thanks to everyone in the IceCube Collaboration, in particular to those whom I have worked with. A special thanks goes to Antonia, Zig, Carlos, Kim, Megan, and Olivia. Thank you for making me feel at home in Madison. Thanks also: Kai, Volker, Donglian, Mårten, Mike R., David A., Martin B., Matt, Elisa R., Naoko, Stefan, Jake F., Jake D., John F., Robert, Claudio, Jason, Mike L., Jan B., Jan K., Ryan, Frank, Jim, Laurel, Sandy, Anna P., Anna B., Sarah, Tania, James, Elisa P., Katherine, Joakim, Michael S., Jakob, Klaus, Ben W., Kurt, Moriah, Gwen, Geraldina, Martin C., and Nancy. You are all great people, and I hope I will have the chance to work with all of you again someday.
Stefan Leupold, you're an excellent teacher who takes all questions seriously and makes the students feel smart. We have had so many interesting discussions, especially about the theoretical aspects of the Standard Model. Thank you Inger and Annica for providing an escape from physics when needed and for always being helpful.
I would like to thank my mother Maria: you are extraordinary in so many ways that I can't even begin to describe. You are supportive and believe I can
do and achieve anything. Thank you also to my sister Linda, to Bazire, my
father Roland, and my late grandparents Evert and Edit. I know you are all
very proud of me and I can honestly say that I would not have been able to do
this without your support.
A big thanks to all my friends: Katarina, Laura, Daniel, Sanna, Mikael,
Noel, Anna-Karin, Andreas, Jessica, Maja, Lena, Li, Jim, Tove, Filip, and
Anette. A special thanks to Katarina with whom I started the science podcast
Professor Magenta. You are exceptionally smart and fun to hang around, and
you bring out the best of me. I’m looking forward to much more of all of that
in future episodes of both the podcast and my life.
My biggest thanks goes to Tony. Thank you for being amazing, for cooking, cleaning, and arranging my life outside of physics. Your encouragement and support mean a lot. You are smart and brilliant and I have become a better person because of you.
Thank you also to the Lennanders foundation for awarding me one of your scholarships, enabling me to finish this project, and to all of you who helped me finish this thesis by providing constructive comments and suggestions on how to improve the text: Olga, Allan, Sebastian, David, Stefan, and Henric.
The IceCube Collaboration acknowledges the support from the following
agencies: U.S. National Science Foundation Office of Polar Programs, U.S.
National Science Foundation Physics Division, University of Wisconsin Alumni Research Foundation, the Grid Laboratory Of Wisconsin (GLOW) grid
infrastructure at the University of Wisconsin - Madison, the Open Science
Grid (OSG) grid infrastructure; U.S. Department of Energy, and National
Energy Research Scientific Computing Center, the Louisiana Optical Network Initiative (LONI) grid computing resources; Natural Sciences and Engineering Research Council of Canada, WestGrid and Compute/Calcul Canada;
Swedish Research Council, Swedish Polar Research Secretariat, Swedish National Infrastructure for Computing (SNIC), and Knut and Alice Wallenberg
Foundation, Sweden; German Ministry for Education and Research (BMBF),
Deutsche Forschungsgemeinschaft (DFG), Helmholtz Alliance for Astroparticle Physics (HAP), Research Department of Plasmas with Complex Interactions (Bochum), Germany; Fund for Scientific Research (FNRS-FWO), FWO
Odysseus programme, Flanders Institute to encourage scientific and technological research in industry (IWT), Belgian Federal Science Policy Office
(Belspo); University of Oxford, United Kingdom; Marsden Fund, New Zealand; Australian Research Council; Japan Society for Promotion of Science
(JSPS); the Swiss National Science Foundation (SNSF), Switzerland; National
Research Foundation of Korea (NRF); Danish National Research Foundation,
Denmark (DNRF).
Rickard Ström,
Uppsala, November 2015
About this Thesis
In this thesis we present an analysis using data from the IceCube Neutrino
Observatory to study the neutrino flux from the Southern hemisphere at energies between 100 GeV and a few TeV. In particular, we search for neutrino
point sources as indicators for sites of hadronic acceleration where some of
the most energetic particles in the Universe are thought to be produced and/or
accelerated.
The thesis is divided into the following chapters: Chapter 1 is an introduction to high-energy neutrino astrophysics. It also serves as the physics
motivation for the analysis. In chapter 2 we discuss the current framework of particle physics, the Standard Model. We focus on the properties of the weak interaction, the only known force (besides gravity) that acts on neutrinos. In chapter 3 we introduce the so-called cosmic ray puzzle and discuss neutrino production in astrophysical sources as well as in the Earth's atmosphere. Chapter 4 deals with the interaction and energy loss of neutrinos and also introduces the detection principle and the different event topologies seen in IceCube. The IceCube detector, including its optical sensors and data acquisition system, is discussed in chapter 5. In chapter 6 we describe the event
simulations used to produce the signal and background samples that we use in
the analysis. In chapter 7 we go through the noise cleaning algorithms applied
to data and all reconstruction techniques used to characterize each observed
event in terms of e.g. direction and energy. Chapter 8 defines the online filter
used at the South Pole to select the neutrino candidate events that constitute
the foundation of the analysis presented in chapter 10. The likelihood analysis method is introduced in chapter 9, and conclusions and a discussion of the results are presented in chapter 11.
The Author’s Contribution
Inspired by my work as a Master student in the Fermi collaboration, I started
off by searching for dark matter from the Galactic center region, but later
switched to a search for point sources in the whole Southern hemisphere.
These searches both focus on lower energies and on the southern sky, where traditional IceCube methods cannot be applied. I developed a new data
stream by constructing the so-called Full Sky Starting (FSS) filter described
in detail in chapter 8. A lot of time was spent on tests and verification of this
filter.
The IceCube crew at the Amundsen-Scott South Pole Station during my stay. From
left: Felipe Pedreros, Ben Riedel, Rickard Ström, Jonathan Davies, Steve Barnet,
Hagar Landsman, John Kelley, Blaise Kuo Tiong, and Larissa Paul.
I have participated in ten IceCube collaboration meetings, including one
in Uppsala, where I was also part of the organizing committee. Further, I
have had the privilege to participate in several summer and winter schools
discussing neutrino theory, phenomenology and experiments as well as dark
matter and high-energy physics in general. I also attended the International Cosmic Ray Conference (ICRC) in 2015, where I presented the results of my analysis. The proceedings are pending publication in Proceedings of Science and are attached at the end of this thesis.
Other work, not included in this thesis, where I have contributed during my
time as a graduate student include:
• IceCube service work. I travelled to the South Pole, Antarctica, in January 2013 for three weeks of service work on the in-ice detector. I
worked together with John Kelley to recover DOMs disconnected due
to various problems, such as communication errors, unstable rates, etc.
This was not only an opportunity to work hands-on with the detector system at the IceCube Lab but also to gain amazing insight into the detector
and the IceCube project as a whole.
• IceCube monitoring. In 2013 I did two weeks of monitoring of the
IceCube detector, and was later asked to develop educational material for future monitoring shifters as well as to train them.
• Teaching assistant. During my time as a graduate student I have been a
teaching assistant on several courses including: particle physics, nuclear
physics, and quantum mechanics. Further, I have taken a course in pedagogical training for academic teachers and worked together with several
other teaching assistants to improve the laboratory exercises in nuclear
physics.
• Outreach activities. I participated in several outreach activities and built
a permanent exhibition of IceCube at Ångströmlaboratoriet at Uppsala
University together with my colleague Henric Taavola. It contains an
event display and monitors showing pictures and a documentary of the
construction of the detector.
• I participated in ‘Gran Sasso Summer Institute 2014 - Hands-On Experimental Underground Physics at LNGS’ (Laboratori Nazionali del Gran
Sasso), near the town of L’Aquila, Italy, where I performed PMT investigations in the Borexino test facility, together with Osamu Takachio and
Oleg Smirnov. The goal was to characterize a large PMT that might be
used in several future neutrino experiments. The results were published
in Proceedings of Science with identification number PoS(GSSI14)016.
Units and Conventions
Throughout this thesis we will use eV (electronvolt) as the standard unit of energy, corresponding to about 1.602 · 10⁻¹⁹ J. It is defined as the energy gained by an electron traversing an electrostatic potential difference of 1 V. Throughout the thesis we also use natural units, i.e., ħ = c = kB = 1, where ħ = h/2π, h is Planck's constant, c is the speed of light in vacuum, and kB is the Boltzmann constant. This means that we can express particle masses in the same units as energy through the relation E = mc² = m. Effectively this can be thought of as expressing the masses in units of eV/c². Throughout this thesis we will use GeV = 10⁹ eV, TeV = 10¹² eV, PeV = 10¹⁵ eV, and EeV = 10¹⁸ eV. When discussing astrophysical objects we sometimes use the erg unit (1 TeV ≈ 1.6 erg).
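To make the conversion concrete, the short Python sketch below (illustrative only; the constants are standard values, and the helper name is ours) converts an energy in TeV to erg:

    # Illustrative unit conversion following the conventions above.
    EV_IN_JOULE = 1.602176634e-19  # 1 eV in joules
    ERG_IN_JOULE = 1.0e-7          # 1 erg = 1e-7 J by definition

    def tev_to_erg(energy_tev: float) -> float:
        """Convert an energy in TeV (1 TeV = 1e12 eV) to erg."""
        return energy_tev * 1.0e12 * EV_IN_JOULE / ERG_IN_JOULE

    print(tev_to_erg(1.0))  # ~1.602, i.e. 1 TeV is about 1.6 erg, as quoted above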
Cover Illustration
The illustration on the cover of this thesis shows the evolution of the Universe
from a dense plasma to the structures we see today, such as galaxies, stars, and
planets. Credit: European Space Agency (ESA).
The author standing on the roof of the IceCube Lab (literally on top of the IceCube
detector) at the South Pole, Antarctica. Visible in the background to the left: the 10 m
South Pole Telescope (SPT) and the BICEP (Background Imaging of Cosmic Extragalactic Polarization) telescope. To the right: the Martin A. Pomerantz Observatory (MAPO).
Abbreviations and Acronyms

AGILE - Astro-rivelatore Gamma a Immagini Leggero
AGN - Active Galactic Nuclei
AHA - Additionally Heterogeneous Absorption
AMANDA - The Antarctic Muon And Neutrino Detector Array
AMS - Alpha Magnetic Spectrometer
ANITA - Antarctic Impulse Transient Antenna
ANTARES - Astronomy with a Neutrino Telescope and Abyss environmental Research project
ARA - Askaryan Radio Array
ARIANNA - The Antarctic Ross Ice Shelf Antenna Neutrino Array
ASCA - The Advanced Satellite for Cosmology and Astrophysics
ATLAS - A Toroidal LHC ApparatuS
ATWD - Analog Transient Waveform Digitizer
Baikal-GVD - Baikal Gigaton Volume Detector
BDT - Boosted Decision Tree
BDUNT - The Baikal Deep Underwater Neutrino Telescope
CC - Charged Current
CERN - The European Organization for Nuclear Research (French: Organisation européenne pour la recherche nucléaire)
Chandra - The Chandra X-ray Observatory
CKM - The Cabibbo-Kobayashi-Maskawa Matrix
C.L. - Confidence Level
CMB - Cosmic Microwave Background
CMS - The Compact Muon Solenoid
COG - Center of Gravity
CORSIKA - COsmic Ray SImulations for KAscade
CP - Charge Conjugation and Parity Violation
CPU - Central Processing Unit
CR - Cosmic Ray
CRT - Classic RT-Cleaning
CTEQ5 - The Coordinated Theoretical Experimental Project on QCD
DAQ - Data Acquisition System
DIS - Deep Inelastic Scattering
DOM - Digital Optical Module
DOR - DOM Readout
DSA - Diffusive Shock Acceleration
ESA - European Space Agency
fADC - fast Analog-to-Digital Converter
FSRQ - Flat-Spectrum Radio Quasar
FSS - Full Sky Starting
GC - Galactic Center
GENIE - Generates Events for Neutrino Interaction Experiments
GPS - Global Positioning System
GPU - Graphical Processing Unit
GRB - Gamma-Ray Burst
GUT - Grand Unified Theories
GZK - Greisen-Zatsepin-Kuzmin
HESE - High-Energy Starting Events
HESS - High Energy Stereoscopic System
HLC - Hard Local Coincidence
HST - The Hubble Space Telescope
HV - High Voltage
IC - Inverse Compton
IceCube - The IceCube Neutrino Observatory
ICL - IceCube Lab
IR - Infrared
ISM - The Interstellar Medium
ISS - International Space Station
Kamiokande - Kamioka Nucleon Decay Experiment
KM3NeT - Cubic Kilometer Neutrino Telescope
KS - Kolmogorov-Smirnov
LED - Light Emitting Diode
LEP - The Large Electron-Positron Collider
LESE - Low-Energy Starting Events
LHC - The Large Hadron Collider
LMC - Large Magellanic Cloud
MAGIC - Major Atmospheric Gamma Imaging Cherenkov Telescopes
MC - Monte Carlo
MESE - Medium-Energy Starting Events
MPE - Multi PhotoElectron
Mu2e - The Muon-to-Electron experiment
NC - Neutral Current
NH - Northern Hemisphere
NuGen - Neutrino-Generator
PAMELA - Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics
PAO - The Pierre Auger Observatory
PDF - Probability Density Function
p.e. - photoelectron
PINGU - The Precision IceCube Next Generation Upgrade
PMNS - The Pontecorvo-Maki-Nakagawa-Sakata Matrix
PMT - Photo Multiplier Tube
PPC - Photon Propagation Code
PREM - Preliminary Reference Earth Model
PROPOSAL - PRopagator with Optimal Precision and Optimized Speed for All Leptons
PTP - Punch Through Point
PWN - Pulsar Wind Nebula
QCD - Quantum Chromodynamics
QED - Quantum Electrodynamics
QFT - Quantum Field Theory
R.A. - Right Ascension
RMS - Root Mean Square
SED - Spectral Energy Distribution
SH - Southern Hemisphere
SLC - Soft Local Coincidence
SM - The Standard Model
SMT - Simple Majority Trigger
SNO - The Sudbury Neutrino Observatory
SNR - Supernova Remnant
SPATS - South Pole Acoustic Test Setup
SPE - Single PhotoElectron
SPICE - South Pole ICE
SRT - Seeded RT-Cleaning
SSC - Self-Synchrotron Compton
STeVE - Starting TeV Events
SuperDST - Super Data Storage and Transmission
Super-K - Super-Kamiokande
SUSY - SUperSYmmetry
TWC - Time Window Cleaning
UHE - Ultra High-Energy
UV - Ultra-violet
VERITAS - Very Energetic Radiation Imaging Telescope Array System
VLA - The Very Large Array
VLT - The European Southern Observatory's Very Large Telescope
WB - Waxman-Bahcall
WIMP - Weakly Interacting Massive Particle
1. Introduction
“Somewhere, something incredible is waiting to be known.”
Carl Sagan
Figure 1.1. Illustration showing the properties of different astrophysical messenger particles. Starting from the top: neutrinos (blue) only interact via the weak force and are hence the ideal cosmic messengers; photons (yellow) may scatter or be absorbed on their way to Earth; charged particles (green, red) are affected by magnetic fields and do not necessarily point back to their source. Credit: Jamie Yang/WIPAC.
Astroparticle physics denotes the scientific study of elementary particles and their interactions in the context of our Universe. It is a branch of particle physics dealing with cosmic messengers such as neutrinos, cosmic rays, and high-energy photons, in many cases produced and accelerated in the most violent and extreme environments of the Universe.
Historically, astroparticle physics sprang out of optical astronomy and the curiosity to understand and reflect upon the Universe we live in: a gigantic laboratory for astrophysical environments and high-energy physics that goes beyond what we can produce and measure on Earth.
Some of the fundamental questions addressed are: What are the fundamental building blocks of the Universe and what are their properties? Can the elementary particles and their interactions explain the structures seen in the Universe? What is the origin of cosmic rays? What are the mechanisms behind the acceleration of particles to extremely high energies?
In this introductory chapter we will present the physics motivation for this thesis, introduce its main actors - the neutrinos - and discuss their unique role in the exploration of the high-energy Universe. Further, we will briefly mention the efforts to connect different kinds of observations of the same astrophysical objects into one unified description, known as the multi-messenger approach.
1.1 Physics Motivation
The first humans studied the Universe with their own eyes, looking for answers in the optical light emitted by stars and galaxies. In fact, almost everything we know today about the Universe is the result of an exploration of the electromagnetic spectrum at different wavelengths. As the techniques and methods grew more sophisticated, we were able to study the Universe as seen in radio, infrared, X-ray and eventually γ-ray radiation, the latter consisting of extremely energetic particles.
Since photons are massless and electrically neutral they are unaffected by
the magnetic fields in the Universe and travel in straight lines, in space-time,
from the point where they escape their production site to Earth where they
can be observed. Indeed they are excellent messenger particles, carrying information about the physics and constituents of other parts of the Universe.
However, they may be absorbed in dense environments on the way or at the
production site itself, scattered in dust clouds, and captured by ambient radiation and matter in the interstellar medium (ISM).
Astronomy is also possible with charged particles, so-called cosmic rays,
but their directions are randomized by interstellar and intergalactic magnetic
fields for all but the highest energies.
Figure 1.2 shows the energy of particles as a function of the observable distance. The blue shaded region illustrates the parameter space where photons
are absorbed through interactions with the Cosmic Microwave Background
(CMB) and Infrared (IR) background radiation. The red shaded area shows
the region where high-energy protons are absorbed by interactions with the
CMB. Low-energy protons do not point back to their source, due to magnetic deflection. Further, the upper (lower) horizontal dashed black line indicates the energy of the highest-energy proton (photon) ever observed. The mean free path of photons decreases quickly with increasing energy, and above 1 PeV (10¹⁵ eV) it is shorter than the typical distances within our own galaxy. The colored bars in the bottom part indicate the positions of the Galactic plane, the local galaxy group, some of the closest Active Galactic Nuclei (AGNs), etc.
The idea of using neutrinos as cosmic messengers has been around since the
late 1930s. They are electrically neutral and about one million times lighter
than the electron [2]. Further they only interact with the weak nuclear force
and gravitationally. Indeed neutrinos are the perfect astrophysical messenger
with the only drawback that the low cross-section also makes them very difficult to detect. In fact, it turns out that one needs detectors of cubic-kilometer size to see even a handful of astrophysical neutrinos per year. Further, the presence of a large atmospheric background requires neutrino detectors to be built underground.

Figure 1.2. Particle energy as a function of the observable distance. The blue (red) shaded region illustrates the parameter space where photons (protons) are absorbed by interactions with the CMB and IR background radiation. The upper (lower) horizontal dashed black line indicates the energy of the highest-energy proton (photon) ever observed. Credit: P. Gorham, 1st International Workshop on the Saltdome Shower Array (SalSA), SLAC, (2005).
The first discovery of astrophysical neutrinos was made in the early 1970s, when Ray Davis and collaborators observed neutrinos coming from the Sun using a detector at the Homestake Gold Mine in South Dakota. Davis was awarded the Nobel Prize in 2002 together with Masatoshi Koshiba, who led the design and construction of Kamiokande (the Kamioka Nucleon Decay Experiment), a water-imaging Cherenkov detector in Japan. Kamiokande later showed that the neutrinos actually pointed back to the Sun, and was also one of the detectors that observed neutrinos from supernova SN1987A in the Large Magellanic Cloud (LMC) in early 1987. These were the first neutrinos ever observed from outside the Solar system.
Since then, many new experiments have been built, and observations have revealed an ever-increasing number of peculiar facts about the neutrinos. Most notably, they seem to have tiny masses, giving rise to so-called neutrino oscillations, responsible for their chameleonic nature: they transform from one type into another as they propagate through space. These observations successfully explained one of the longest-standing problems of neutrino astrophysics, namely the large deficit in the number of observed electron-neutrinos from the Sun, see section 2.8.2.
One of the next big breakthroughs in neutrino astrophysics was made in 2013 with the discovery of a diffuse flux of high-energy neutrinos of astrophysical origin by the IceCube Neutrino Observatory (hereafter IceCube) [3]. Localized sources of high-energy neutrinos have not yet been seen outside of our Solar system and remain one of the most wished-for discoveries, in particular since neutrinos are indicators of sites of hadronic acceleration, where some of the most energetic particles in the Universe are thought to be produced and accelerated.
1.2 The Multi-Messenger Approach
With more pieces of the great cosmic puzzle we can build a greater picture.
This is the essence of what is called the multi-messenger approach in astronomy: combining observations with different messengers and at various energies to reach a deeper understanding of an event or process in the Universe.
In practice this means that traditional observations of electromagnetic radiation are combined with observations of high-energy γ-rays and neutrinos
to provide complementary information. In the future gravitational waves may
play an equally important role.
A stunning example of what can be learned is shown in figure 1.3 where we
present multi-wavelength images of the nearby galaxy Centaurus A, revealing enormous jets of relativistic particles perpendicular to the accretion disc.
Localized sources of neutrino emission may reveal other mind-boggling facts
about the most energetic sources in the Universe.
Further, neutrino flux predictions build upon the close association between
the production and acceleration of cosmic rays and the non-thermal photon
emission from astrophysical sources. This close connection is covered in more
depth in chapter 3.
Figure 1.3. Multi-wavelength images of the nearby galaxy Centaurus A. Top right: X-ray data from Chandra. Mid right: radio data from the VLA. Bottom right: optical data from the ESO's Wide-Field Imager (WFI) camera at the ESO/MPG 2.2-m telescope on La Silla, Chile. Left: combined X-ray, radio, and optical data. Credit: X-ray: NASA/CXC/CfA/R. Kraft et al.; Radio: NSF/VLA/Univ. Hertfordshire/M. Hardcastle; Optical: ESO/WFI/M. Rejkuba et al.
2. The Standard Model
“Young man, if I could remember the names of these particles,
I would have been a botanist.”
Enrico Fermi
The visible Universe consists mainly of three elementary particles: the electron, the up-quark, and the down-quark. The latter two are the building blocks of the atomic nuclei, forming the more familiar proton and neutron, while the electron orbits the nucleus and gives rise to electricity and complex atomic and molecular bonds. These particles are the fundamental constituents of proteins, cells, and humans, but also of astrophysical objects such as stars, planets and the vast ISM.
When combined with the electromagnetic force we end up with a set of tools that can explain most of the physics of everyday life. But it was realized already in the early 1930s, after careful studies of cosmic rays (see chapter 3), that more particles and forces were needed to explain the large variety of radiation and phenomena observed [2].
The first major step towards creating a unified standard model in particle
physics was taken in the 1960s by Glashow, Weinberg, and Salam [4]. They
managed to combine the electromagnetic interaction, Quantum Electrodynamics (QED), with the weak interaction, creating a unified electroweak theory
in the same spirit as Maxwell in the 1860s, who formulated the theory of
the electromagnetic field encompassing the previously separate field theories:
electricity and magnetism.
The modern Standard Model (SM) of particle physics is a collection of quantum field theories that gives us a unique insight into the structure of matter in terms of a plethora of 30¹ elementary particles and the interactions among them. Elementary particles are particles thought to be definitive constituents of matter: the smallest building blocks of nature. The SM describes the electroweak force and the strong nuclear force (a.k.a. the strong force), the latter in the theory of Quantum Chromodynamics (QCD). Further, the SM provides tools to calculate and predict interactions of combinations of elementary particles, e.g., observed in the incredibly rich zoo of bound quark states, so-called hadrons.
¹ This number is derived by counting the particles shown in figure 2.1: 12 fermions, 12 anti-fermions, and 6 bosons (the hypothetical graviton not included).
Figure 2.1. The Standard Model of elementary particles. Credit: CERN.
The SM is by far the most accurate theory ever constructed [4], and its crowning achievement came only a few years ago, in 2012, when the ATLAS (A Toroidal LHC ApparatuS) and CMS (The Compact Muon Solenoid) Collaborations together announced the discovery of the Higgs particle in data from proton-proton collisions at the Large Hadron Collider (LHC) complex at CERN [5, 6].
In the following sections we will present the constituent particles and fields
of the SM. We will discuss the importance of symmetries in the underlying
theories and further explore the weak interaction that is essential in the description of the interactions and phenomena of neutrinos. We conclude the
chapter by discussing theories that go beyond the SM.
2.1 Matter Particles and Force Carriers
The SM is presented schematically in figure 2.1, where each square represents an observed particle. Each particle is associated with a set of quantum numbers such as, e.g., electric charge, mass², and spin. These determine the way the particles interact and propagate through space. For a thorough introduction to the particles and interactions in the SM see e.g. [4].

² Strictly speaking, neutrinos (νe, νμ, and ντ) are not mass eigenstates [4].
The 3 columns on the left-hand side of the figure consist of families (also known as generations) of half-integer spin particles called fermions. These are often thought of as the particles that constitute matter. The particles in the first column (u, d, e, and νe) are stable on large time scales, while the 2nd and 3rd columns contain particles seen almost exclusively at accelerators or in particle showers resulting from cosmic ray interactions in the Earth's atmosphere. The heavier generations are essentially copies of the first generation, with the only observable difference being the higher masses of the rarer particles. The two top rows show the six quarks, which all have fractional charges. These are not observed as free particles in nature but rather in bound states of either three quarks, so-called baryons, or a pair consisting of a quark and an anti-quark, so-called mesons³. This is a manifestation of a phenomenon called confinement, which is a part of QCD. The two bottom rows show the six leptons: three charged leptons (e⁻, μ⁻, and τ⁻), each with a corresponding neutral particle, the neutrino (νe, νμ, and ντ). As indicated in the figure, the fermions also have corresponding antiparticles⁴.
The right-hand side of the figure shows integer spin particles called bosons. In the SM a fundamental force is the result of continuous exchange of such force-mediating particles. The large rectangles indicate to what extent the forces act upon the fermions. Starting from the bottom, we have the weak force mediators W± and Z⁰. These couple to particles with weak isospin I_W and weak hypercharge Y. In particular, they couple to all fermions and constitute
the only SM coupling to neutrinos resulting in their unique role in astroparticle
physics. The charged part of the weak force mediated by W ± is responsible for
radioactive decays of subatomic particles. The γ (photon) is the mediator of
the electromagnetic force and couples to all charged particles (incl. W ± ), while
the eight (8) massless color-state gluons (g) only interact with the quarks (and
themselves) to give rise to the strong force responsible for the attractive bonds
in hadrons and nuclei. As with the quarks, gluons have never been observed free in nature, but hypothetical gluon-only states called glueballs could exist and might have been observed [8].
The distinction between fermions as the constituents of matter on one side and bosons as force-mediating particles on the other is somewhat simplified. For example, the valence quarks alone cannot account for the mass or spin of composite particles such as protons. To determine the properties of such bound hadronic states we need to consider the impact of both the quark and gluon sea.
³ Indications for the existence of more exotic configurations have been suggested lately; see [7] for a recent review on the topic.
⁴ Neutrinos might actually be their own antiparticles, but this remains to be settled by experiment.

Figure 2.2. The inverse of the gauge couplings as a function of energy for the three interactions in the SM (dashed lines) and including SUSY (solid lines), extrapolated to the GUT scale. The two SUSY lines represent different values for the energy scale of the SUSY particles (500 GeV and 1.5 TeV). Figure taken from [9].

John Wheeler, a famous theoretical physicist known for his large contributions to the field of general relativity and also for coining the term 'wormhole'
(theoretical shortcuts in four-dimensional space-time), once said “Mass tells space-time how to curve, and space-time tells mass how to move”. The interplay between fermions and bosons is very much the same: just like yin and yang, two opposite and contrary forces work together to form a whole.
2.2 Interaction Strengths
The coupling strength between fermions and bosons is determined by dimensionless gauge couplings α, so that the probability of an interaction includes
a factor α for each interaction vertex. Effectively these couplings are energy
dependent, see figure 2.2, where the inverse gauge couplings ('strengths') are shown as a function of energy for the three interactions in the SM (dashed lines) and including supersymmetry (SUSY) (solid lines); see section 2.8.
The energy dependence can be understood in terms of vacuum polarizations where virtual particle-antiparticle pairs with the charge(s) relevant for
each force partially screen the fundamental coupling in the field surrounding
a central charge. For QED the effect gets smaller the closer we get to the
central charge, i.e., the coupling grows with energy. QCD is fundamentally
different in that the corresponding gauge bosons, the gluons, themselves carry
(color) charge. The polarization of virtual gluons instead augments the field,
an effect that in the end wins over the color charge screening caused by virtual
quark-antiquark pairs. QCD is therefore referred to as being asymptotically
free in that the bonds become asymptotically weaker with increasing energy.
A similar balancing act is observed for the weak interaction with the self-interacting W± and Z⁰, but it is less pronounced due to the finite masses of the gauge bosons. Further, the number of gauge bosons is smaller, leading to an additional weakening of the effect.
At energies close to the electroweak scale O(100 GeV), we obtain the following relations between the fundamental forces:
$$\alpha^{-1}_{\mathrm{EM}} : \alpha^{-1}_{W} : \alpha^{-1}_{S} \approx 128 : 30 : 9, \tag{2.1}$$
where all gauge couplings are sufficiently small for perturbation theory to apply. Somewhat lower in energy, at O(1 GeV), perturbation theory breaks down
for QCD and we enter the non-perturbative region with bound hadronic states
and the final levels of hadronization processes in which hadrons are formed
out of jets of quarks and gluons as a consequence of color confinement.
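The behaviour in figure 2.2 can be sketched with the standard one-loop renormalization-group evolution, in which each inverse coupling runs linearly in ln Q,

$$\alpha_i^{-1}(Q) = \alpha_i^{-1}(m_Z) - \frac{b_i}{2\pi}\,\ln\frac{Q}{m_Z}.$$

The minimal Python sketch below is illustrative only (it is not part of any analysis code); it assumes the usual one-loop SM coefficients, with the GUT normalization for U(1), and approximate starting values at the Z mass:

    import math

    # One-loop running of the inverse gauge couplings (SM case, cf. the
    # dashed lines in figure 2.2). Starting values at m_Z are approximate.
    M_Z = 91.2  # GeV
    ALPHA_INV_MZ = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}
    B_COEFF = {"U(1)": 41.0 / 10.0, "SU(2)": -19.0 / 6.0, "SU(3)": -7.0}

    def alpha_inv(group: str, q_gev: float) -> float:
        """Inverse coupling of the given group at the scale Q (in GeV)."""
        return ALPHA_INV_MZ[group] - B_COEFF[group] / (2.0 * math.pi) * math.log(q_gev / M_Z)

    for group in ("U(1)", "SU(2)", "SU(3)"):
        print(group, round(alpha_inv(group, 1.0e16), 1))

Near the GUT scale the three SM values come close to each other but do not meet exactly, which is the near-miss visible in the dashed lines of figure 2.2.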
2.3 Symmetries
Symmetries play an essential role in modern physics and in particular in the
SM. The first to understand their significance was Emmy Noether who in 1915
proved that continuous symmetries give rise to conserved quantities [10].
The SM is constrained by a set of local and global symmetries. These constraints are motivated by experimental evidence and observations of particle interactions.
In section 2.3.2 we describe how local symmetries dynamically give rise to massless force carriers and the conservation of so-called good quantum
numbers. But we will begin by considering a number of global symmetries
in section 2.3.1. In the framework of the SM these are often thought of as
accidental in contrast to the local gauge symmetries.
2.3.1 Global Symmetries
One of the most important laws of conservation is that of the baryon number
B, in particular because it ensures the stability of the proton by prohibiting processes like⁵

$$p \not\to e^{+} + \nu_{e} + \bar{\nu}_{e}. \tag{2.2}$$

Baryons (anti-baryons) are assigned baryon number B = 1 (B = −1), or equivalently quarks (anti-quarks) can be assigned B = 1/3 (B = −1/3). Mesons, consisting of a quark and anti-quark pair, therefore have baryon number 0 and can decay into the lepton sector (e.g. $\pi^{-} \to \mu^{-} + \bar{\nu}_{\mu}$)⁶.

⁵ Also forbidden by lepton number conservation.
Separate quantum numbers can also be defined for the different quark flavors. These are all seen to be conserved separately for both the electromagnetic and strong interaction. We will see below that the weak interaction couples to isospin doublets with mixing between the three families of quarks and
can therefore violate the individual quark flavor quantum numbers.
To distinguish neutrinos from anti-neutrinos we introduce a total lepton number $L = n_{\ell} - n_{\bar{\ell}}$, where $n_{\ell}$ ($n_{\bar{\ell}}$) is the number of leptons (anti-leptons). This has been observed to be conserved in all known interactions.
Further, we observe that separate lepton numbers for each family, Le, Lμ, and Lτ, are also conserved, with the exception of the oscillations of neutrino eigenstates in the weak sector. Electromagnetic interactions that violate lepton family numbers, e.g., μ± → e± + γ, are searched for but have never
been observed. The Mu2e (Muon-to-Electron) experiment at Fermilab (US)
began construction in early 2015 and expects preliminary results around 2020
[11]. It will have unprecedented sensitivity to such decays and is particularly
interesting since such conversions are expected at low rates in many models
of physics beyond the SM, e.g., SUSY discussed briefly in section 2.8.
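As a toy illustration of this bookkeeping, the short Python sketch below (the quantum-number assignments are standard; the particle labels and helper function are ours, purely for illustration) verifies that the forbidden process (2.2) violates B and L while the pion decay quoted above conserves both:

    # Each particle name maps to a (baryon number B, total lepton number L) pair.
    QUANTUM_NUMBERS = {
        "p": (1, 0), "pi-": (0, 0),
        "e+": (0, -1), "mu-": (0, 1),
        "nu_e": (0, 1), "nubar_e": (0, -1), "nubar_mu": (0, -1),
    }

    def conserves_B_and_L(initial, final):
        """True if B and L both sum to the same values on each side."""
        def totals(names):
            return tuple(map(sum, zip(*(QUANTUM_NUMBERS[n] for n in names))))
        return totals(initial) == totals(final)

    print(conserves_B_and_L(["p"], ["e+", "nu_e", "nubar_e"]))  # False: eq. (2.2) is forbidden
    print(conserves_B_and_L(["pi-"], ["mu-", "nubar_mu"]))      # True: allowed meson decay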
2.3.2 Local Gauge Symmetries
It can be shown that a system described by the Lagrangian L is invariant under local phase transformations if we add terms to L that cancel the ones arising from the derivatives of the local phase itself. This is called the principle of local gauge invariance [12].
The additional terms introduce matter-gauge field interactions, realized by
the exchange of massless spin-1 bosons as force mediators. In particular they
introduce conserved currents with associated conserved quantum numbers.
For example, the electromagnetic interaction, described by the U(1)⁷ local gauge symmetry, is mediated by the massless γ and further leads to the conservation of electric charge⁸, a quantum number that is non-zero for particles participating in the electromagnetic interaction. Further, the strong force, mediated by eight massless gluons, acts on particles with quantum numbers called color charge, a consequence of the SU(3)_C [4] local gauge symmetry of the terms describing the strong interaction.
The weak interaction is discussed in detail below in section 2.4. For a more
complete discussion about local gauge symmetries see textbooks in particle
physics and quantum field theory, e.g., [12] and [4].
⁶ Leptons have baryon number B = 0.
⁷ The unitary group U(n) with n = 1 consists of all complex numbers with absolute value 1 under multiplication (the so-called circle group) [4].
⁸ The conservation of charge is in fact a consequence already of global U(1) symmetry.
2.4 The Weak Interaction
The weak force is unique in the following ways: two of the three force carriers have electric charge; they are all massive (mZ⁰ ≈ 91.2 GeV [2], mW± ≈ 80.4 GeV [2]), unlike the force carriers of QED and QCD; and the interactions do
not conserve parity nor charge conjugation. Further, weak interactions can
be categorized into two separate processes: Charged Current (CC) interactions mediated by W ± and Neutral Current (NC) interactions mediated by
Z⁰. The latter were first identified in the Gargamelle bubble chamber experiment at CERN in 1973 [13], using a beam of muon-neutrinos produced in
pion decay. The Gargamelle Collaboration observed two types of NC events:
muon-neutrino scattering from a hadron without turning into a muon, as well
as events characterized by a single electron track, see figure 2.3. The weak bosons themselves were discovered in 1983 in proton-antiproton collisions at the Super Proton Synchrotron (SPS) at CERN [14, 15, 16].
The charged current interactions violate parity (mirror symmetry) and charge conjugation maximally and only interact with left-handed particles and right-handed antiparticles⁹. This is condensed into a so-called “vector minus axial-vector” (V-A) theory. This chiral theory can be described by an SU(2)_L [4] local gauge symmetry where right-handed particles and left-handed antiparticles have been placed in weak-isospin singlet states, i.e., with weak isospin 0:

$$(\psi_e)_R,\ (\psi_\mu)_R,\ (\psi_\tau)_R \quad\text{and}\quad (\psi_u)_R,\ (\psi_c)_R,\ (\psi_t)_R,\ (\psi_d)_R,\ (\psi_s)_R,\ (\psi_b)_R. \tag{2.3}$$
Note that there are no right-handed neutrinos in the SM [4]. Left-handed particles (and right-handed antiparticles) are arranged in weak isospin doublets
differing by one unit of charge:
$$\begin{pmatrix}\psi_{\nu_e}\\ \psi_{e}\end{pmatrix}_{\!L},\ \begin{pmatrix}\psi_{\nu_\mu}\\ \psi_{\mu}\end{pmatrix}_{\!L},\ \begin{pmatrix}\psi_{\nu_\tau}\\ \psi_{\tau}\end{pmatrix}_{\!L} \quad\text{and}\quad \begin{pmatrix}\psi_{u}\\ \psi_{d}\end{pmatrix}_{\!L},\ \begin{pmatrix}\psi_{c}\\ \psi_{s}\end{pmatrix}_{\!L},\ \begin{pmatrix}\psi_{t}\\ \psi_{b}\end{pmatrix}_{\!L}. \tag{2.4}$$
The flavor eigenstates in the quark sector differ from the corresponding mass eigenstates, giving rise to so-called quark mixing, parametrized by the Cabibbo-Kobayashi-Maskawa (CKM) matrix. Mixing is also seen in the lepton sector and is described by the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. While charged leptons are eigenstates of both flavor and mass, neutrinos are not, resulting in neutrino oscillations¹⁰. The mass of the neutrinos is discussed in section 2.8.2.
⁹ The handedness of the particles can refer either to helicity, the projection of the spin onto the direction of momentum, or, as here, to chirality, an abstract concept related to helicity (for massless particles they are in fact the same) that describes how particle states transform under Lorentz transformations.
¹⁰ We consider the weak mixing to take place between the neutrinos and not the charged leptons, but this is merely a matter of definition. However, mixing among the neutrinos is the natural choice, since mixing between charged leptons would be technically unobservable due to the large mass splitting.
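As a concrete illustration of the oscillations mentioned above, in the standard two-flavor approximation (a textbook simplification, not a result specific to this thesis) the transition probability between flavors α and β is

$$P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),$$

so the oscillation pattern is governed by the mixing angle θ, the mass-squared splitting Δm², the baseline L, and the neutrino energy E.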
Figure 2.3. The first neutral current event ever detected using the Gargamelle experiment at CERN in 1973. Credit: Picture printed with permission from CERN. Particle
trace by Christine Sutton (Editor, CERN Courier).
Complex phases in these matrices give rise to Charge-Parity (CP) violation effects. This has been observed in the CKM matrix and leads to weak processes that occur at different rates for particles and mirrored antiparticles. CP violation is particularly interesting since it might be connected to the difference between the content of matter and anti-matter in the Universe. CP violation has yet to be observed in the lepton sector and is the focus of the next generation of long-baseline neutrino oscillation experiments.
SU(2)_L has three massless gauge bosons, W¹, W², and W³ [12]. Linear combinations of the first two can be identified as the charged W-bosons (essentially corresponding to the raising and lowering operators of weak isospin), and it would be tempting to associate the third boson with the observed Z⁰. The problem is that W³ breaks parity maximally, unlike Z⁰, which does couple to right-handed particles and left-handed antiparticles, although with a strength different from that for left-handed particles and right-handed antiparticles. Hence, following the same procedure as for QED and QCD leads to a theory in tension with experimental observations.
The solution came through the unification of the SU(2)_L gauge theory and a modified version of the electromagnetic gauge theory, U(1)_Y, with a new gauge field B^0 and an associated weak hypercharge Y. With linear combinations of the fields W^3 and B^0 we can describe the observed γ and Z^0 bosons, with the caveat that the Z^0 should still be massless. Note that although the theory is unified, it does not follow that the gauge couplings of the individual components unite^11, see figure 2.2. However, the theory gives relations between the masses of the massive bosons as well as for their gauge couplings. These relations are determined by a parameter, the weak mixing angle θ_W, experimentally measured to be 28.74° [2].
2.5 The Higgs Mechanism
Both the force carriers and the fermions subject to the weak interaction are massive, which is in tension with local gauge symmetry arguments. This means either that the underlying symmetries must be broken below some very high energy not yet accessible, or that the theory is wrong. Considering the former, we can introduce a scalar field, the so-called Higgs field, to the SM in a way that spontaneously breaks the underlying SU(2)_L ⊗ U(1)_Y symmetry of the electroweak interaction. This triggers the so-called Higgs mechanism: a beautiful way of giving mass to the boson fields W^± and Z^0 through their interaction with the Higgs field.
In the SM spontaneous symmetry breaking can happen when a field acquires a non-zero vacuum expectation value. Considering a simplified case (following [12]) of a global U(1) symmetry of the Lagrangian, for a complex scalar field Φ = (1/√2)(Φ_1 + iΦ_2) with an appropriate choice of potential, the so-called 'Mexican hat', it can be shown that we obtain an infinite set of minima along a constant radius. This is depicted in figure 2.4, where the potential is illustrated as a function of the real scalar components Φ_1 and Φ_2. The vacuum state is the lowest energy state of the field Φ and can be chosen in the real direction, (Φ_1, Φ_2) = (v, 0), without loss of generality.
The Lagrangian for the complex scalar field can be rewritten in terms of two new scalar fields (η and ξ) following an expansion around the minimum^12.
From this we can identify several interaction terms between η and ξ, but more
importantly both a massive and a massless scalar field. The latter corresponds
to a so-called Goldstone boson predicted by Goldstone’s theorem [17], and
can be identified with excitations in the direction along the constant circle
of minima. The former instead describes excitations in the radial direction
corresponding to a massive gauge boson.
These arguments can be promoted to hold for the local gauge symmetry
U(1) by replacing the standard derivatives with appropriate covariant derivatives, at the cost of introducing a new massless gauge field B_μ. The result is again a massive scalar field η, a massless Goldstone boson ξ, and additionally a mass term for B_μ. Hence, spontaneous symmetry breaking can give mass to the gauge boson of the local gauge theory. The Goldstone boson ξ can be eliminated from the Lagrangian by making an appropriate gauge transformation, and it can be shown that ξ is "eaten" by the massive gauge boson, providing its longitudinal polarization, see e.g. [12].

11 The theory is described by the product group SU(2)_L ⊗ U(1).
12 In Quantum Field Theory (QFT) we define particles as excitations of the participating fields when expanded around the vacuum state.
For the full electroweak symmetry we introduce a weak isospin doublet of complex scalar fields φ = (φ^+, φ^0), called the Higgs field. The field φ has four real-valued components φ_i (i = 1, 2, 3, 4). The Lagrangian is given by:

$$\mathcal{L}_{\text{Higgs}} = (\partial_\mu \phi)^\dagger(\partial^\mu \phi) - \left(\mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2\right) \tag{2.5}$$

where the second term is the Higgs potential. For μ^2 < 0 it has an infinite set of degenerate minima. By construction, the Lagrangian is invariant under global transformations of both U(1)_Y and SU(2)_L. The two components of the doublet differ by one unit of charge, just like the left-handed fermion doublets in the weak charged current interaction. Since the photon should remain massless, the minimum of the Higgs potential should be chosen to give a non-zero value in the neutral component φ^0 only. The Lagrangian can be made symmetric under the full local gauge symmetry by introducing covariant derivatives involving both B_μ and W_μ^i (i = 1, 2, 3) fields.
The spontaneous symmetry breaking of the full electroweak Lagrangian results in the observed mixing of the B_μ and W_μ^3 fields when considering the masses of the physical gauge bosons. Technically, the breaking gives a massive scalar and three massless Goldstone bosons. The latter correspond to the longitudinal polarizations of the massive electroweak bosons W^± and Z^0. The massive scalar is called the Higgs boson. When such a particle was discovered at CERN in 2012, it provided an important missing piece of the electroweak puzzle^13. The combination of the unified electroweak theory with the Higgs mechanism is known as the Glashow-Weinberg-Salam model.
2.6 Fermion masses
Fermion masses are particularly tricky in the SM since such terms are not
invariant under the electroweak gauge symmetry. The Higgs mechanism generates the masses of the massive weak bosons but can also give mass to the
fermions. Couplings between the Higgs field and the fermion fields, constructed through combinations of left- and right-handed particle states, can be
shown to generate gauge invariant mass terms at the strength of flavor dependent so-called Yukawa couplings proportional to the vacuum expectation value
of the Higgs potential (v ∼ 246 GeV) and the fermion masses.
13 Note that the actual mass of the Higgs boson was not predicted by the theory. The window of possible masses was rather narrowed down by over three decades of experimental observations.
Figure 2.4. An illustration of the Higgs potential V(φ) over the (Re(φ), Im(φ)) plane in a simplified model: the classic shape of a Mexican hat.
The coupling to a fermion with mass m_f can be expressed as [12]:

$$g_f \propto \frac{m_f}{v}. \tag{2.6}$$
The couplings to the massive fermions turn out to be quite different in size: the coupling to the top quark is of order O(1), while the couplings to the tau and the electron are O(10^−2) and O(10^−6), respectively.
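As a minimal numerical sketch of these ratios (assuming the convention g_f = √2 m_f/v; other conventions differ by O(1) factors, and the masses below are rounded):

    from math import sqrt

    # Rough Yukawa couplings g_f ~ sqrt(2) * m_f / v; masses in GeV (rounded).
    v = 246.0  # Higgs vacuum expectation value [GeV]
    masses = {
        "electron": 0.000511,
        "tau": 1.777,
        "top": 173.0,
        "neutrino (hypothetical 1 eV Dirac mass)": 1.0e-9,
    }

    for name, m in masses.items():
        print(f"{name}: g_f ~ {sqrt(2.0) * m / v:.1e}")

This prints couplings of roughly 3e-6, 1e-2, 1, and 6e-12, consistent with the orders of magnitude quoted in the text.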
Further, without right-handed neutrinos we cannot generate mass for the neutrinos through the Higgs mechanism at all, and even if such particles exist, the corresponding Yukawa couplings would be of order 10^−12. This might hint that the neutrino masses are provided by a different mechanism. The so-called see-saw mechanism is a popular candidate, see section 2.8.2.
2.7 The Parameters of the Standard Model
The expanded SM including also neutrino oscillations has 26 free parameters [12]. These are all measured by experiments and include the twelve (12)
Yukawa couplings mentioned in section 2.6 or equivalently the fermion masses
(mν1 , mν2 , mν3 , me , mμ , mτ , mu , . . . ), the three (3) gauge couplings of the
electromagnetic, weak, and strong interactions, the Higgs vacuum expectation
value and Higgs mass, four (4) parameters (mixing angles, phases) for each
of the CKM and PMNS matrices and finally a strong CP phase related to a
fine-tuning problem in the theory of the strong interaction [12]. The latter is
not covered in this thesis, see e.g. [12].
We note that the fermion masses (except for the masses of the neutrinos) are of the same order within each family, and we have already seen in figure 2.2 that the coupling strengths of the gauge fields are similar in size at the electroweak scale. Indeed, there seem to be patterns among these parameters, and it is tantalizing to think of them as hints of physics and symmetries beyond the SM.
2.8 Beyond the Standard Model
Although incredibly successful the SM is not the end of particle physics.
A growing number of phenomena cannot be explained within the current framework but requires extensions of the SM: most notably the observation of neutrino masses, the only firm evidence for physics beyond the SM. These topics
are discussed below, in particular we discuss the notion of neutrino oscillations
and means to give the neutrinos mass.
The flavor sector of the SM is a source of several unexplained phenomena.
E.g. there is no deeper explanation of why there are exactly three families of
fermions and why the PMNS matrix is relatively flat compared to the close to
diagonal CKM matrix. Further, the CP violation terms in these matrices are
likely too small to explain the observed matter-antimatter asymmetry in the
Universe.
Figure 2.5. Tip of the iceberg. Illustration of the energy budget of the Universe. Only about 5% consists of ordinary visible matter such as electrons, protons, and photons. About 25% consists of so-called dark matter, and the remaining part of mysterious so-called dark energy. Credit: [18].
GUTs attempt to unify the electroweak interaction with the strong interaction, i.e., with focus on restoring the apparently broken symmetry between colorless leptons and colored quarks [2]. The simplest GUT would be SU(5), the smallest Lie group that can contain all of the symmetries of the SM (U(1)_Y ⊗ SU(2)_L ⊗ SU(3)_C) [2]. This gives rise to twelve (12) colored X-bosons with GUT-scale masses, which generally cause problems with the stability of the proton, and predicts a weak mixing angle θ_W in tension with observations. Note that GUTs lead to a common gauge coupling for all three (3) interactions above some high energy scale O(10^16 GeV) where the symmetry is broken. This unification of forces becomes even more accurate when the framework of SUSY is included.
SUSY is an extension of the SM, building on the success of considering symmetries as a means to understand interactions and derive conservation laws. By incorporating a fundamental symmetry between fermions and bosons, it doubles the number of particles in the SM: each SM particle gets a super-partner differing by half a unit of spin. In particular, it provides several viable dark matter particle candidates, see section 2.8.1 and references therein. Since we have not yet observed such super-particles at the same mass scale as the SM particles, this symmetry is thought to be broken at some unknown energy scale.
The governing force in the Universe is gravity, but at lengths typical for particle physics, i.e., at the subatomic scale, it is negligible, with a strength that is 10^40 times weaker than the electromagnetic force. There is nothing a priori that says that the strengths should be different, hence this constitutes a so-called hierarchy problem. To solve it, some theories include extra dimensions into which parts of the initially strong gravitational interaction leak. Attempts to include gravity in the SM include so-called supergravity theories, where the quantum world and the general theory of relativity are combined into a single framework. The corresponding gauge particle is called the "graviton" but has never been observed.
Another hierarchy problem arises in the Higgs sector, where a large fine-tuning is required to obtain the observed bare mass O(100 GeV). The gauge and chiral symmetries of the SM prevent the spin-1 and spin-1/2 particles from getting mass (except for the dynamical generation described in section 2.5), but there is nothing that protects the spin-0 Higgs particle. Since SUSY connects a scalar to a fermion, it can provide such protection by introducing complementary quantum correction terms [4].
2.8.1 The Dark Sector
Physics beyond the SM is also found by studying the large-scale structures of the Universe. Measurements of velocity dispersions in clusters of galaxies [19], galaxy rotation curves [20, 21, 22], weak gravitational lensing [23], and
Figure 2.6. "Pandora's cluster": a collision of at least four galaxy clusters revealing a plethora of complex structures and astrophysical effects. Credit: X-ray: NASA/CXC/ITA/INAF/J.Merten et al.; lensing: NASA/STScI, NAOJ/Subaru, ESO/VLT; optical: NASA/STScI/R.Dupke.
the CMB [24] provide experimental evidence for a dark sector that dominates
the energy budget of the Universe.
The CMB reveals tiny fluctuations in the temperature (energy) of photons from different arrival directions. These photons were produced at the time of recombination, about 380,000 years after the Big Bang. The fluctuations contain an imprint of the large-scale structures in the early Universe, which started as primordial quantum fluctuations and were then amplified during an epoch of rapid expansion known as inflation, and further by gravitational forces and the expansion of the Universe. The magnitude and spatial structure of the fluctuations reflect dynamics requiring much more gravitational mass than can be provided by the visible Universe. In particular, they show that baryonic non-luminous matter like brown dwarfs can make up only a fraction of the dark matter abundance.
In fact, ordinary baryonic matter only accounts for about 5% [24] of the total energy density of the Universe. 26% consists of so-called dark matter, and the remaining 69% seems to correspond to some kind of dark energy [24]. The two dark components are distinct in that the former is inferred from the observed mismatch between luminous and gravitational matter, while dark energy is introduced to explain the accelerating expansion of the Universe, i.e., dark matter clusters while dark energy does not. In the standard model of cosmology, ΛCDM, dark energy is attributed to a non-zero so-called cosmological constant that emerges in the equations of general relativity. The concept of dark energy is beyond the scope of this thesis, but we will say a few words about dark matter, since it is potentially closely related to the physics described by the SM.
If dark matter consists of some kind of particles, these could be a relic density of stable neutral particles that decoupled in the early hot Universe as it expanded and cooled down (for a review of these concepts see e.g. [25]). Particles that were relativistic (non-relativistic) at the time of freeze-out constitute so-called hot (cold) dark matter. Hot dark matter, e.g. SM neutrinos, cannot make up a large fraction of the dark matter content, since it would have 'free-streamed' out of small density enhancements and suppressed them. Cold dark matter is very interesting and compatible with N-body simulations, in which one simulates the formation of large-scale structure in the Universe. Particular interest has been directed towards so-called Weakly Interacting Massive Particles (WIMPs). These arise in a large number of theories, such as SUSY and theories with extra curled-up dimensions.
Figure 2.6 shows the so-called 'Pandora's Cluster'^14, located about 3.5 billion light years from Earth. It shows the collision of at least four galaxy clusters in a composite image consisting of X-ray data from Chandra in red, tracing the gas content (gas with temperatures of millions of degrees), and gravitational lensing data from the Hubble Space Telescope (HST), the European Southern Observatory's Very Large Telescope (VLT), and the Subaru telescope, tracing the total mass concentration in blue, on top of an optical image from HST and VLT. The center region shows a separation between gas and matter, where the former slows down as a result of friction in the collisions of the hot gas clouds, induced by the electromagnetic force. The main fraction of the matter seems to interact only gravitationally, providing empirical evidence for dark matter.
2.8.2 Neutrino masses and oscillations
The neutrinos included in the SM are massless and left-handed. Historically
this agreed well with observations and the latter even holds today: only lefthanded neutrinos have been observed in nature. But experiments studying
neutrino oscillations have confirmed that neutrinos are indeed massive albeit
with a tiny mass in the O(1 eV) [2].
Neutrino oscillations have been seen in measurements of neutrinos from the
Sun, from nuclear reactors and from neutrinos produced in the atmosphere.
The first indication came from a deficit in the flux of solar neutrinos compared
to the Standard Solar Model and was measured in Ray Davis’ Homestake Experiment in the late 1960s [26]. The concept of neutrino oscillations had been
14 Officially known as Abell 2744.
put forward by Bruno Pontecorvo already in 1957 and was later revised by Pontecorvo, Maki, Nakagawa, and Sakata.

The observations of the Super-Kamiokande (Super-K) detector in Japan and the Sudbury Neutrino Observatory (SNO) in Canada, around the turn of the millennium, provided the first evidence for neutrino oscillations. For this discovery, Takaaki Kajita and Arthur B. McDonald were awarded the Nobel Prize in Physics 2015.
Considering vacuum mixing between two flavors, the probability to measure a certain flavor ν_β given ν_α is a function of the total particle energy E and time t. Since neutrinos are close to relativistic, we can parametrize t in terms of the distance travelled, L. The probability is given by [27]:

$$P(\nu_\alpha \rightarrow \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV^2}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right), \tag{2.7}$$

where Δm^2 = m_2^2 − m_1^2 and θ is the mixing angle. This result can be generalized to three flavors by considering the full PMNS matrix [27].
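A short numerical sketch of equation 2.7 (the parameter values below are illustrative, roughly at the atmospheric mass-splitting scale; they are not taken from the text):

    import math

    def p_transition(sin2_2theta, dm2_eV2, L_km, E_GeV):
        """Two-flavor vacuum oscillation probability, equation 2.7."""
        return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    # Example: maximal mixing, dm^2 ~ 2.4e-3 eV^2, a 10 GeV neutrino
    # crossing the Earth's diameter (~12,700 km).
    print(p_transition(1.0, 2.4e-3, 12700.0, 10.0))  # ~0.44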
The result of oscillations with three flavors in vacuum is shown in figure 2.7, for oscillation parameters consistent with current measurements, see the figure caption. The plots show the probability for a muon-neutrino to appear as any of the three neutrino flavors as a function of the ratio L/E. The upper plot shows a zoom-in on small values of L/E, relevant for atmospheric oscillations, while the lower plot shows a larger range, relevant for solar oscillations. The effect of neutrino oscillations is only relevant over macroscopic distances and can safely be ignored at the small distances typical for e.g. collider experiments, see equation 2.7. The oscillation probability can be dramatically enhanced if we consider neutrino propagation in matter, through the so-called MSW (Mikheyev-Smirnov-Wolfenstein) effect [27]. In particular, this effect was crucial for understanding and solving the solar neutrino problem, a deficit in the number of electron-neutrinos arriving at Earth.
In order to give mass to the particles through Yukawa couplings to the Higgs field, we need terms with mixed Dirac fermion fields (right- and left-handed). But even if we were to include right-handed neutrino fields (ψ_νl)_R (l = e, μ, τ) (or the corresponding left-handed antineutrino fields) in the SM, the Higgs couplings would have to be extremely small to explain the small neutrino masses. These neutrinos would only interact gravitationally and are therefore referred to as 'sterile' neutrinos^15. Sterile neutrinos can be searched for in neutrino oscillation experiments, where they would mix with the left-handed particles. That effect is particularly strong for propagation in a dense medium such as the Earth [27].
Right-handed neutrinos and left-handed antineutrinos are invariant under the SM gauge symmetries, which is why we can add terms to the Lagrangian formed

15 Technically, the term 'sterile' neutrino is used to distinguish any non-interacting neutrino from the 'active' neutrinos that participate in the SM interactions.
Figure 2.7. Probability for a muon-neutrino to appear as any of the three neutrino flavors as a function of the ratio L/E. The upper plot shows a zoom-in on small values of L/E, relevant for atmospheric oscillations, while the lower plot shows a larger range, relevant for solar oscillations. The black line illustrates the probability to observe the particle as an electron-neutrino, blue a muon-neutrino, and red a tau-neutrino. The oscillation parameters used are consistent with current measurements (sin^2(2θ_13) = 0.10, sin^2(2θ_23) = 0.97, sin^2(2θ_12) = 0.861, δ = 0, Δm^2_12 = 7.59 · 10^−5 eV^2, and Δm^2_32 ≈ Δm^2_13 = 2.32 · 10^−3 eV^2). Normal hierarchy is assumed.
by their fields without breaking the local gauge invariance. In fact, the full neutrino mass Lagrangian can be written as a combination of Dirac mass terms (generated by the spontaneous symmetry breaking) and so-called Majorana mass terms. The Majorana term can in principle lead to processes with total lepton number violation that could be searched for in experiments, but since the helicity and chirality are almost identical for neutrinos, such processes are suppressed [12]. Instead, experiments focus on observing neutrinoless double β-decays, a process that is only possible for so-called Majorana neutrinos, neutrinos that are their own antiparticles.
The Lagrangian is defined as:

$$\mathcal{L}_{DM} = -\frac{1}{2}\begin{pmatrix}\bar{\nu}_L & \bar{\nu}^c_R\end{pmatrix}\begin{pmatrix}0 & m_D \\ m_D & M\end{pmatrix}\begin{pmatrix}\nu^c_L \\ \nu_R\end{pmatrix} + \mathrm{h.c.}, \tag{2.8}$$

where the eigenvalues are $m_\pm = \frac{1}{2}\left(M \pm M\sqrt{1 + 4m_D^2/M^2}\right)$, ν_{L,R} are the left- and right-handed neutrino states respectively, and ν^c_{R/L} is the CP conjugate of ν_{R/L} [12].
If m_D is of the same order as the lepton masses and M ≫ m_D, then m_− ≈ m_D · (m_D/M) = m_D^2/M and m_+ ≈ M, i.e., m_− is suppressed by the scale of M. This way of giving the observed neutrinos light masses is called the see-saw mechanism, and it typically predicts extremely high masses for the other eigenstate (in GUTs, M ∼ 10^15 GeV), far above the reach of current experiments [12]. If Majorana mass terms exist, the mechanism predicts one massive neutrino O(M) for each light SM neutrino O(1 eV).
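A small numerical sketch of these scales (the input values are illustrative assumptions: an electroweak-scale Dirac mass and a GUT-scale Majorana mass):

    # See-saw estimate: for M >> mD the exact eigenvalue expression is
    # numerically delicate in floating point (1 + 4*mD**2/M**2 rounds to 1),
    # so we use the expansions m+ ~ M and m- ~ mD**2 / M directly.
    mD = 100.0   # Dirac mass [GeV], assumed electroweak scale
    M = 1.0e15   # Majorana mass [GeV], assumed GUT scale

    m_heavy = M
    m_light = mD**2 / M  # [GeV]

    print(f"m+ ~ {m_heavy:.0e} GeV, m- ~ {m_light:.0e} GeV "
          f"(~{m_light * 1e9:.2f} eV)")  # ~0.01 eV, a light SM-like neutrino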
Summary
After this review of the SM and in particular of some of the particle physics
aspects of the neutrinos, we are ready to introduce neutrinos in a wider context;
namely to discuss the important role of neutrinos in the determination of the
origin and production of cosmic rays.
3. The Cosmic Ray Puzzle
“Thus it is possible to say that each one of us and all of us
are truly and literally a little bit of stardust.”
William A. Fowler
The Quest for the Origin of the Elements, 1984
The Earth's atmosphere is constantly bombarded by charged particles and nuclei, so-called Cosmic Rays (CRs), at the astonishing rate of 1,000 particles per square meter per second [28]. These are primarily protons and helium nuclei, with a small contribution from electrons.

In general these particles are relativistic, but they arrive with a huge variation in kinetic energy, the most energetic ones with an energy as large as that of a brick falling from a roof. The highest energies observed so far belong to so-called Ultra High-Energy (UHE) CRs, which have energies of O(10^20) eV [29].
When CRs were discovered in the early 1910s, they provided the first evidence of a more exciting Universe, consisting of something more than the stars and gas observed in optical telescopes. The discovery was made through a series of high-altitude balloon flights by Victor Hess (1911-12, < 5.3 km) [30] and subsequently by Werner Kolhörster (1913-14, < 9 km) [30], both measuring an increasing flux of ionizing particles with increasing altitude, thus ruling out the Earth itself as the source. Their discovery became one of the biggest breakthroughs in particle physics and started an era of CR experiments that would lead to the discovery of many new particles, most notably the positron (anti-electron) in 1932 [31], the muon in 1936 [32], the pion in 1947 [30], and later the kaons and lambdas, the first particles discovered that contained the strange quark [30].
The majority of the CRs were confirmed to have positive charge, as demonstrated by studies of the so-called east-west effect, the result of deflection in the geomagnetic field [33, 34, 35]. It was also realized in the late 1930s (Rossi, Auger, Bethe, Heitler) that the particles we observe at the ground are in fact so-called secondaries, produced by primary CR interactions with air nuclei, causing electromagnetic and hadronic cascades that propagate like showers in the atmosphere, see figure 3.1.
Figure 3.1. Artistic impression of a CR entering Earth’s atmosphere where it
interacts with air nuclei to form electromagnetic and hadronic showers. Credit:
CERN/Asimmetrie/Infn.
Countless experiments have been built since then, and they have provided us with many new clues about the nature of CRs, in particular about their composition and propagation through the ISM. But the most important questions remain partly unanswered: the origin and the production mechanisms are still to a large extent unknown.
Two important features of the CR data that can be used to learn more about their origin are their composition, i.e., the relative abundance of different nuclei, and their energy spectra. While the latter may be characteristic of a particular acceleration mechanism, the former can be used to infer what kind of sources are involved in the acceleration process. This is done by comparing composition models and data from different astrophysical objects [28]. Further, the identification of a localized source of CRs, either direct or indirect, may provide the most important clue, in particular if the CRs are observed in connection with known emitters of electromagnetic radiation such as X-rays, γ-rays, etc.
Galactic CRs are usually modeled by the so-called leaky box model. It assumes that CRs are confined by magnetic fields within the galactic disk (a few 10^−6 G) but gradually leak out. Since the resulting gyro-radii of CRs below 10^15 eV ('the knee') are much smaller than the size of the Galaxy, they remain trapped for times on the order of 10^6 years [30]. Scattering on inhomogeneities in the magnetic fields randomizes their directions and leads to a high degree of isotropy, hence they have no memory of their origin when reaching Earth.
Since CRs above the knee cannot be contained by the galactic magnetic fields, they are not trapped long enough to accelerate to high energies.
Figure 3.2. CR anisotropy observed by IceCube using 6 years of data. The plots are preliminary and show the relative intensity in equatorial coordinates. The median energy of the primary CRs is shown in the upper left corner of each plot. Each map was smoothed using a 20° radius. Interestingly, the deficit and excess switch positions as we go above ∼ 100 TeV. The reason for this is unknown. Figure taken from [36].
They are assumed to be extragalactic in origin, or alternatively associated with strong local sources.
Anisotropies in the arrival directions of CRs can be introduced either through large magnetic fields with significant deviation from isotropic diffusion (through modified propagation models), by local sources, or by significant escape from the Galactic disk^1. The predicted anisotropy from these effects features large regions of relative excess and deficit in the number of events, with amplitudes on the order of 10^−2 to 10^−5 [38]. Further, there are effects due to the motion of the observer relative to the CRs, e.g. due to the motion of

1 At TeV energies propagation effects could give rise to large-scale anomalies [37], but it is not clear if these can also explain observations made at higher energies.
the Earth relative to the Sun, and potentially a so-called Compton-Getting effect [39], a dipole anisotropy due to the relative motion of the solar system with respect to the rest frame of the CRs^2. However, the predicted amplitude and phase of the latter are not consistent with data from current air shower experiments [40], i.e., if the Compton-Getting effect is present in data, it is overshadowed by stronger effects and may be one of several contributions to the CR anisotropy.

Indeed, observations of CR arrival directions show a nearly isotropic distribution at most energies [2]. However, energy dependent anisotropies at the level of ∼ 10^−3 have been observed by several experiments, amongst others IceCube [40] and Milagro [41] (for median primary CR energies of 6 TeV).
Figure 3.2 shows the CR anisotropy observed by IceCube using 6 years of data. The plots are preliminary and show the relative intensity in equatorial coordinates for three different median energies. Interestingly, the deficit and excess switch positions as we go above ∼ 100 TeV. The reason for this is still unknown.
In the following sections we will focus on the observed energy spectra and the possible acceleration mechanisms and sources involved in the production of high-energy CRs. In particular, we will investigate the close link between the acceleration of high-energy CRs (> 100 MeV) and the production of high-energy astrophysical neutrinos in section 3.2. For a more complete review of CRs, see e.g. [28, 30, 38].
3.1 Energy Spectrum
One of the most striking facts about CRs is their enormous span in energy and
flux. The spectrum covers several decades in energy, from 108 to 1020 eV, and
flux, from 104 to 10−27 GeV−1 m−2 s−1 sr −1 . The spectrum approximately3
follows an unbroken power-law E−γ with a slope γ ≈ 2.7 above 1010 eV. Further it has several interesting features; a knee around 1015 − 1016 eV, a second
knee around 1017 eV, an ankle around 1018 − 1019 eV and finally a sharp cutoff above 1020 eV. Figure 3.3 shows the flux of CRs multiplied with E2 as
a function of kinetic energy. Further, on the horizontal axis, it illustrates the
typical collision energy of man-made accelerators such as the LHC at CERN
and Tevatron at Fermilab (Fermi National Accelerator Laboratory). Above the
knee we observe roughly 1 particle per m2 per year, while above the ankle we
only observe 1 particle per km2 per year.
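As a rough consistency check of these rates (a sketch: the integral of a power law dN/dE ∝ E^−γ above E0 scales as E0^(1−γ); the slope γ ≈ 3 between knee and ankle and the pivot energies are approximate):

    # Integral rate above the ankle, scaled from the quoted rate above the knee.
    gamma = 3.0            # approximate differential slope between knee and ankle
    E_knee = 3.0e15        # [eV]
    E_ankle = 3.0e18       # [eV]
    rate_knee = 1.0        # [m^-2 yr^-1], as quoted in the text

    rate_ankle = rate_knee * (E_ankle / E_knee) ** (1.0 - gamma)
    print(f"~{rate_ankle:.0e} m^-2 yr^-1 = ~{rate_ankle * 1e6:.0f} km^-2 yr^-1")

This reproduces the quoted ~1 particle per km^2 per year above the ankle.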
The spectral features can be seen more clearly in figure 3.4, which shows the CR flux multiplied by E^2.6. This represents the differential energy spectrum as a function of reconstructed energy-per-nucleus, from direct measurements
2 Note that the rest frame of the Galactic CRs is not known.
3 The spectral index depends on the compilation of data, in particular the different composition models assumed [29].
Figure 3.3. CR particle flux as a function of energy. Credit: Bakhtiyar Ruzybayev/University of Delaware.
below 10^14 eV and from air-shower events above 10^14 eV. The lowest energy CRs can be detected directly, while at high energies the flux is so low that a direct measurement is no longer feasible, and we instead study the air-showers produced when CRs interact in the atmosphere.

Air-showers are detected using mainly three different techniques: the number density and lateral distances of charged particles on the ground are measured in air-shower arrays; Cherenkov radiation from the charged particles, as they propagate in the atmosphere, is measured in air Cherenkov detectors; and fluorescence detectors study Ultra-violet (UV) light created as CRs interact with primarily nitrogen in the atmosphere. In modern facilities these techniques are usually combined, see e.g. the Pierre Auger Observatory (PAO) [42], utilizing 1,600 surface Cherenkov detectors combined with 24 fluorescence detectors, together covering a detection area of 3,000 km^2.
At low energies, E < 10 GeV, the CR spectrum is largely influenced by the solar wind, consisting of electrons and protons with energies in the range 1 < E < 10 keV [2, 29]. At even lower energies the spectrum is modulated by Earth's geomagnetic field. Further, in the event of a solar flare, the spectrum is dominated by particles from the Sun in the range of a few tens of keV to a few GeV. The low-energy CR abundance is hence largely dependent on both location and time.
While CRs with energies between a few keV and a few GeV are known to be created in large amounts in the Sun, the birthplace of the high-energy CRs (a few GeV to 10^20 eV) is still under debate. Indeed, the origin of CRs is a rather complex question, mainly because many different processes are likely at play simultaneously throughout the full energy spectrum. One of the key questions is to figure out whether the spectral features are caused by processes happening during creation (acceleration) or propagation through the Universe. Typically we associate events to the left of the knee with the Galaxy. We will see in section 3.5 that shock acceleration in Supernova Remnants (SNRs) can explain particle energies up to PeV (∼ 10^15 eV) energies. Further, events to the right of the knee are thought to show the onset of an extragalactic component. Objects such as AGNs and Gamma-Ray Bursts (GRBs) have the necessary power to provide the energy in this region.
[Figure 3.4: the all-particle spectrum E^2.6 F(E) [GeV^1.6 m^−2 s^−1 sr^−1] versus E [eV], with the knee, second knee, and ankle indicated, compiling data from Grigorov, JACEE, MGU, Tien-Shan, Tibet07, Akeno, CASA-MIA, HEGRA, Fly's Eye, Kascade, Kascade Grande, IceTop-73, HiRes 1, HiRes 2, Telescope Array, and Auger.]
Figure 3.4. CR particle flux as a function of energy-per-nucleus. The data points show the all-particle flux from a number of air-shower experiments. Figure from [2].
3.2 The Origin of High-Energy Cosmic Rays
Technically CRs are divided into different categories based on their production
site, but this division is not sharp and often not decisive4 .
We will use the following definition: primary CRs denote particles that are
accelerated in astrophysical sources: protons, electrons, and nuclei such as
helium, carbon, oxygen, and iron. Secondary CRs denote particles that are
created through interactions of primary CRs with the gas and dust in the ISM,
in the Earth’s atmosphere or in regions close to the accelerators themselves,
so-called astrophysical beam dumps.
When comparing the composition of CRs with the abundance of elements in the Solar system, we see two main differences. In general there is an enrichment in the CRs of the heaviest elements relative to H and He. This is not entirely understood but could be due to a different composition at the sources [28]. Further, we observe two groups of secondaries - Li, Be, B and Sc, Ti, V, Cr, Mn - having a much larger relative abundance among the CRs compared to the relative abundance in the solar system. These two groups of elements are rare end-products of stellar nucleosynthesis, but are present in the CR flux as a result of so-called spallation of primary CRs in the ISM^5. Further, the ratio of secondary to primary nuclei is observed to decrease with energy [2]. This is interpreted in terms of high-energy particles escaping the Galaxy to a higher degree. These observations give us a handle on the parameters in propagation models and the lifetime of the CRs in our Galaxy.
Another source of secondary CRs is the production of electron-positron pairs, which becomes relevant for protons above 1 EeV and dominates the energy loss up to around 70 EeV, where pion-production takes over.
CRs with energies above 50 EeV may interact with Cosmic Microwave Background (CMB) photons to produce pions through p + γ_CMB → Δ+ → n(p) + π+(π0). This provides the mechanism for the so-called Greisen-Zatsepin-Kuzmin (GZK) cut-off [47, 48], potentially the reason for the sharp cut-off in the energy spectra of CRs around 50 EeV [49, 50]. The neutrinos produced through the decay of these charged pions are very highly energetic and are referred to as GZK-neutrinos. The GZK flux prediction is sensitive to the fraction of heavy nuclei present in the UHE CRs. These lose energy through photo-disintegration and hence lower the neutrino production efficiency [51].
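A short kinematic sketch of the threshold for this process (for a head-on collision; the assumed photon energy of ~6 × 10^−4 eV is a typical CMB value, and photons in the high-energy tail of the blackbody lower the threshold toward the quoted ~50 EeV):

    # Head-on threshold for p + gamma_CMB -> Delta+:
    # E_p,th = (m_Delta^2 - m_p^2) / (4 * E_gamma), masses in eV.
    m_delta = 1.232e9   # [eV]
    m_p = 0.938e9       # [eV]
    E_gamma = 6.3e-4    # typical CMB photon energy [eV]

    E_th = (m_delta**2 - m_p**2) / (4.0 * E_gamma)
    print(f"E_p,th ~ {E_th:.1e} eV")  # ~2.5e20 eV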
The search for GZK-neutrinos has been conducted with a large variety of experiments: large radio telescopes such as e.g. LUNASKA (The Lunar UHE
4 E.g. the excess of positrons measured by PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics), and later confirmed by Fermi and the AMS-01 and AMS-02 (Alpha Magnetic Spectrometer) experiments at the International Space Station (ISS), has been interpreted as evidence for dark matter [43], in terms of modified propagation models [44], and as evidence for a nearby astrophysical source [45]. See [29] and references therein for a brief review of the topic.
5 In this context spallation refers to the process in which nuclei emit nucleons after being hit by incoming high-energy particles.
Neutrino Astrophysics using the Square Kilometer Array), searching for radio pulses from neutrino-induced particle cascades in the Moon [52], balloon experiments such as ANITA (Antarctic Impulsive Transient Antenna), satellite experiments such as FORTE (Fast On-orbit Recording of Transient Events) [53], etc. Future detectors include ARA (Askaryan Radio Array) [54, 55] and ARIANNA (Antarctic Ross Ice Shelf Antenna Neutrino Array) [56]. These all search for radio signals created by the so-called Askaryan effect [57]. Further, SPATS (South Pole Acoustic Test Setup) studies acoustic signals formed in dense neutrino-induced cascades in the Antarctic ice, see e.g. [58] for a review on this topic.
In sections 3.3 and 3.4, we will show how secondary CRs are produced in regions close to astrophysical sites and in the relatively dense environment of the Earth's atmosphere. These sites also produce high-energy neutrinos, the main topic of this thesis. Figure 3.5 illustrates the sources of neutrinos throughout the Universe. These include both natural sources, such as the hypothetical relic neutrinos from the Big Bang ('Cosmological ν'), solar neutrinos, atmospheric neutrinos, and neutrinos produced in violent processes related to supernova bursts and AGNs, as well as man-made sources, such as reactor neutrinos produced in radioactive decays of heavy isotopes used for fission energy generation.
Figure 3.5. Measured and expected neutrino fluxes from several sources, both natural and man-made. The incredible energy range covers both time-dependent and time-independent sources, from μeV all the way up to EeV. The figure is taken from [46] with kind permission from Springer Science and Business Media.
Figure 3.6. Cross-section for photo-production of neutral and charged pions. The
single-pion production (black solid and red dashed line) is dominated by the resonance
of Δ+ at mΔ ≈ 1232 MeV [60]. Figure from [60].
3.3 Astrophysical Neutrinos
Astrophysical neutrinos can be produced as a result of interactions between
accelerated primary CRs and the matter and radiation fields that surround the
acceleration site. The important processes are the hadronic reactions^6:
p + γ → Δ+ → n(p) + π+(π0),    (3.1a)
p + p → p + n(p) + π+(π0),    (3.1b)
where equation 3.1a is commonly referred to as a photo-production process, in which the protons interact with ambient photons, typically produced by accelerated electrons. Photo-production is the foundation of the so-called Δ-approximation, see figure 3.6, where we assume that the proton-γ cross-section can be described by an analytic Breit-Wigner resonance at mΔ ≈ 1232 MeV [60]. Equation 3.1b is referred to as an astrophysical beam dump process due to the similarity with accelerator-based experiments [29]. The balance between these reactions is related to the cross-section and number density of each particle present [2].
Neutrinos are produced in the decay of charged pions, e.g., π+ → μ+ + νμ (99.99%) followed by μ+ → e+ + νe + ν̄μ (≈ 100%). The typical energy for each of the three neutrinos is Eν ∼ (1/4)Eπ ∼ (1/20)E_p, i.e., protons at the CR knee create ∼ 100 TeV neutrinos [61]. γ-rays are produced in the decay of neutral pions, π0 → γ + γ (98.82%). Further, escaping neutrons may decay through the weak interaction, n → p + e− + ν̄e (100%). All branching ratios are taken from [2].

6 This set of reactions can also proceed with neutrons and/or higher mass mesons and baryons in the final states. Further, see [59] for a review of channels with resonances.
The energy emission of astrophysical sources consists of escaping high-energy protons, γ-rays, and neutrinos, all with highly correlated continuous non-thermal energy spectra. The pion spectrum from photo-production, and hence the neutrino spectrum, follows closely the primary proton spectrum if the photons are thermal, but is somewhat steeper if the process proceeds with synchrotron radiation, produced by electrons losing energy in strong magnetic fields [62]. Note that these arguments require that the sources are transparent enough for protons to interact at least once before escaping and that mesons decay before interacting. For optically thin sources, photons are emitted in coincidence with the neutrinos, but for thick sources they avalanche down to lower energies until they escape, giving rise to a distribution of sub-TeV photons. In models, one typically normalizes the expected neutrino flux to the observed flux of γ-rays or CRs, given assumptions on the production efficiency and absorption in the sources [63].
The neutrino flux for a generic transparent source of extragalactic CRs is called the Waxman-Bahcall (WB) limit or WB bound [64, 65]. By integrating the energy spectrum from figure 3.4 above the ankle, assuming an E^−2 spectrum and a GZK cut-off at 50 EeV, we arrive at a CR energy density ρ ∼ 3 × 10^−19 erg cm^−3 [63], or ∼ 3 × 10^37 erg s^−1 Mpc^−3 over the Hubble time of 10^10 years. Converted to power for different populations of sources this gives [66]:
• ∼ 3 × 10^39 erg s^−1 per galaxy,
• ∼ 2 × 10^42 erg s^−1 per cluster of galaxies,
• ∼ 2 × 10^44 erg s^−1 per AGN, or
• ∼ 2 × 10^52 erg per cosmological GRB.
These numbers are of the same order of magnitude as the observed electromagnetic emission from the listed sources [63]. Further, they are consistent with a transparent model where the most important process is photo-production through the Δ-resonance and the energy emission is approximately equally distributed among the CRs, γ-rays, and neutrinos. These sources have emerged as leading candidates for the extragalactic component of the CRs, and the derived neutrino flux is loosely confined in the range [63]:

$$E_\nu^2\, d\Phi_{WB}/dE_\nu = 1{-}5 \times 10^{-8}\ \mathrm{GeV\,cm^{-2}\,s^{-1}\,sr^{-1}}, \tag{3.2}$$

where Φ_WB is the WB bound. The precise value depends on a number of assumptions: the minimum energy considered in the integration of the CR spectrum, details of the production process (in particular the energy distributions), the cosmological evolution of the CR sources, etc. For details see [63], [59] (Δ-approximation), [67] (simulations of pγ processes), and [68] (simulations of pp processes). The Δ-approximation is crucial in order to construct
approximate pion-production cross-sections and to determine gross features
like the number ratio of γ to neutrinos produced [59].
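As a rough numerical cross-check of the per-source powers listed above (a sketch: the source number densities and the GRB rate used here are illustrative order-of-magnitude assumptions, not values quoted in the text):

    # Divide the CR luminosity density by assumed source number densities.
    lum_density = 3.0e37  # [erg s^-1 Mpc^-3], from the integration above

    densities = {          # assumed number densities [Mpc^-3]
        "galaxy": 1e-2,
        "cluster of galaxies": 1e-5,
        "AGN": 2e-7,
    }
    for name, n in densities.items():
        print(f"~{lum_density / n:.0e} erg/s per {name}")

    grb_rate = 3e-8        # assumed GRB rate [Mpc^-3 yr^-1]
    sec_per_yr = 3.15e7
    print(f"~{lum_density * sec_per_yr / grb_rate:.0e} erg per GRB")

This returns roughly 3e39, 3e42, 2e44 erg/s, and 3e52 erg, matching the listed values to within the uncertainty of the assumed densities.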
A similar exercise can be done for the Galactic part of the CR energy spectrum, leading to an energy density ρ ∼ 10^−12 erg cm^−3 (∼ 1 eV cm^−3) [28, 63]. Assuming a volume corresponding roughly to the size of the Galactic disk, V = πR^2 d ∼ π(15 kpc)^2 (200 pc) ∼ 4 · 10^66 cm^3, we can derive the power required to accelerate these Galactic CRs [28]:

$$W = \frac{V\rho}{\tau} \sim 5 \cdot 10^{40}\ \mathrm{erg\ s^{-1}}, \tag{3.3}$$
where τ is the confinement time of CRs in the source region, typically of the order of 10^6 years [28]. Interestingly, this power estimate is similar to the power provided by typical supernova scenarios, if we assume a supernova rate corresponding to one every 30 years and that roughly 10% of the power produced by supernovae, each releasing 10^51 erg, can be used to accelerate new particles through the expanding shock wave [63]. The predicted neutrino rate gives a handful of detectable neutrinos per decade of energy per km^2 and year [63], assuming that the sources extend to 100 TeV with an E^−2 spectrum.
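A quick numerical check of equation 3.3 and the supernova budget (a sketch using the numbers quoted in the text, with τ taken as 10^6 years):

    import math

    # Power needed to sustain the Galactic CR energy density, W = V * rho / tau.
    pc_cm = 3.086e18                    # parsec in cm
    V = math.pi * (15e3 * pc_cm) ** 2 * (200.0 * pc_cm)   # ~4e66 cm^3
    rho = 1.0e-12                       # CR energy density [erg cm^-3]
    tau = 1.0e6 * 3.15e7                # confinement time [s]

    W = V * rho / tau
    print(f"W ~ {W:.0e} erg/s")         # ~1e41, same order as the quoted ~5e40

    # Supernova budget: 10% of 1e51 erg per supernova, one every 30 years.
    P_SN = 0.1 * 1.0e51 / (30.0 * 3.15e7)
    print(f"P_SN ~ {P_SN:.0e} erg/s")   # ~1e41 erg/s, indeed a similar power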
The relative ratio of the different neutrino flavors produced according to equations 3.1 is 1:2:0 for νe:νμ:ντ. Due to flavor oscillations during propagation, this implies that the astrophysical neutrinos arriving at Earth should have the ratio 1:1:1, given the so-called long baseline approximation [69, 70].
The observation of localized neutrinos from known γ-ray emitters would undoubtedly confirm the existence of hadronic accelerators. However, the Fermi collaboration recently published results identifying hadronic accelerators solely based on the presence of γ-rays produced from the decay of π0 in the Spectral Energy Distribution (SED) [71]. Assuming an E^−2 spectrum, the pion-decay bump is characterized by a sharp rise at 70-200 MeV, tracing the parent population above ∼ 1 GeV [72]. The height depends on the maximal proton energy and the fraction of energy transferred to the γ-rays.
The standard model for production of high-energy photons is the so-called Self-Synchrotron Compton (SSC) mechanism, see e.g. [73]. This constitutes the basis for leptonic production, where high-energy photons are produced through so-called synchrotron emission, caused by the movement of electrons in the strong magnetic fields of jets. The generated photon spectrum peaks in the IR to X-ray region and constitutes the so-called synchrotron peak. The synchrotron photons subsequently get up-scattered by the same parent electron distribution through Inverse Compton (IC) scattering. Further, high-energy photons are also generated through Bremsstrahlung as charged particles are decelerated. These latter two contributions result in photon energies in the range GeV-TeV. The non-thermal emission spectrum is hence characterized by a double-peak distribution, see e.g. figure 3.7. Note that the SSC model has an intrinsic energy limit, set by the decrease of the IC cross-section in the Klein-Nishina regime [74]. This is in contrast to the hadronic scenarios, where
[Figure 3.7: two panels of E^2 dN/dE (erg cm^−2 s^−1) versus energy (eV) for SNR IC 443, showing Fermi-LAT, VERITAS, MAGIC, and AGILE data together with model curves for π0-decay, Bremsstrahlung, Bremsstrahlung with a break, inverse Compton, a mixed model, and a best-fit broken power law.]
Figure 3.7. Spectral Energy Distribution (SED) of SNR 'IC 443'. The figures show best-fit values to Fermi-LAT data (circles) of both leptonic and hadronic components (the radio band is modeled using synchrotron emission), with the upper figure showing a zoom-in on the γ-ray band. The solid lines represent the best-fit pion-decay spectrum, and the dashed lines show the best-fit Bremsstrahlung spectra. The inverse Compton component is shown as a long-dashed line in the lower plot, together with a mixed model (Bremsstrahlung and pion-decay), shown as a dotted line. See details in the legend. Figure taken from [71]. Reprinted with permission from AAAS.
Figure 3.8. Best-fit neutrino spectra from the IceCube global fit paper [77], assuming a single power-law and including all flavors. The red (blue) shaded areas correspond to 68% C.L. allowed regions for the astrophysical (conventional atmospheric) flux component, while the green line represents the 90% C.L. upper limit on the prompt atmospheric flux, which was fitted to zero. Figure taken from [77].
the maximal energy is set by the maximum proton acceleration energy of the
source.
The leptonically produced photons can outnumber the photons from hadronic production by large amounts, in particular for sources with a very low matter density around the acceleration region. Sources known to be interacting with clouds of dense material therefore provide good targets in the search for evidence of CR acceleration. SNRs interacting with molecular clouds belong to this class of interesting sources, producing γ-rays via the decay of neutral pions produced in the dense gas surrounding the sources [75]. One of the best examples of such SNR-cloud interactions in the Galaxy is SNR IC 443, with an estimated age of about 10,000 years, located at a distance of 1.5 kpc [76]. The plots in figure 3.7 show the measured energy spectrum of SNR 'IC 443'. The circles show the best-fit values to Fermi-LAT data of both leptonic and hadronic components. The solid lines denote the best-fit pion-decay γ-ray spectra. See further details in the figure caption. The conclusion from this search was that the γ-rays are most likely produced through the hadronic channels, while leptonic production can still account for the synchrotron peak. These models predict a neutrino luminosity that can be searched for by neutrino detectors like IceCube.
The IceCube Collaboration reported the detection of a diffuse flux of astrophysical neutrinos in 2013 [3, 78]. Evidence for astrophysical neutrinos was also seen in a separate analysis looking for a diffuse astrophysical neutrino flux using tracks from the Northern hemisphere [79]. The best-fit all-flavor astrophysical neutrino flux, calculated using a global fit to six different IceCube searches, is given as [77]:

$$\phi = \left(6.7^{+1.1}_{-1.2}\right) \cdot 10^{-18}\ \mathrm{GeV^{-1}\,cm^{-2}\,s^{-1}\,sr^{-1}}\quad\text{(at 100 TeV)}, \tag{3.4}$$
where γ = 2.50 ± 0.09 [77] (valid for neutrino energies between 25 TeV and
2.8 PeV [77]). The atmospheric-only hypothesis was rejected at 7.7σ, assuming a χ^2-distribution. Apart from a statistically weak cluster close to the Galactic center, the arrival directions of these neutrinos are consistent with an isotropic distribution, but the actual origin is unknown. Since the events extend to large Galactic latitudes, many suggestions involve extragalactic sources, such as AGNs embedded in molecular clouds, intense star-formation regions, etc. See [80] for a review and references therein. Note that since high-energy photons have a short absorption length, such emission is not to be expected in association with the IceCube flux, unless there is a significant Galactic contribution [61]. Instead, we can search for association with the possible sub-TeV extension of the signal, as mentioned in the beginning of this section.
3.4 Atmospheric Backgrounds
Atmospheric muons constitute the by far most dominant part of the event yield
in large-volume underground particle detectors such as IceCube. Analyses
searching for astrophysical neutrino-induced muons starting inside of the detector are particularly plagued by atmospheric muons mimicking truly starting events. Further, most analyses also struggle with a close to irreducible
background of atmospheric neutrinos. The mimicking muons are particularly
troublesome for searches for low-energy neutrinos, relevant for this work, or
the diffuse astrophysical flux, but less so when searching for clusters of localized high-energy neutrinos. In this section we discuss these backgrounds and
the important interplay between decay and interaction that takes place in the
Earth’s atmosphere.
The interactions of high-energy nuclei and protons in the Earth’s atmosphere give rise to electromagnetic and hadronic air-showers, see illustration
in figure 3.9. The CR flux through the atmosphere can be approximated with a
set of coupled cascade equations [2], but Monte Carlo simulations are needed
to accurately account for the decay and interactions of the secondary particles
as well as the spectral index of the primary CRs.
The main reactions generating muons and neutrinos are:
p + N → π± (K ± ) + X
→ μ± + νμ (ν̄μ )
→ e± + νe (ν̄e ) + ν̄μ (νμ ),
(3.5)
where p is a proton, N is the target nucleus, and X represents the final state
hadron(s). The decays of hadrons induce both electromagnetic cascades of
high-energy photons, electrons, and positrons as well as highly penetrating
muons and neutrinos.
The balance between interaction and decay in the atmosphere is energy dependent. The critical energy is defined as the energy where the interaction probability equals the decay probability, under the assumption of an isothermal atmosphere. The so-called conventional flux of muons and neutrinos is primarily produced in decays of π± or K±^7, with critical energies of 115 GeV and 855 GeV [81], respectively. Above the critical energy these particles tend to interact in the atmosphere before they decay, which is why a steepening (softening) of the muon spectra, by about one power relative to the primary spectrum of CRs, is observed above these energies, i.e., γ_conv ∼ 3.7 [28]. Note that this effect is less pronounced for zenith angles close to 90° since the mesons then travel longer in the low density of the upper atmosphere, where they do not interact [2]. The flux from semi-leptonic decays of short-lived charmed hadrons (generally heavy quark particles with lifetimes smaller than 10^−12 s) is referred to as prompt, and is characterized by a flatter muon spectrum, i.e., relatively more muons at higher energies. This is because the charmed hadrons, in contrast to the conventional component, in general decay before losing energy in interactions. E.g., D± has a critical energy of 3.8 · 10^7 GeV [29]. Current estimates of the cross-over from the conventional to the prompt component predict that it occurs at ∼ 1 PeV, see e.g. [82].
Consequently, the muon and neutrino spectra below O(100 GeV), where decay dominates, follow the CR spectrum and are well described by a power-law spectrum:

$$\frac{d\Phi_\nu}{dE_\nu} \propto E^{-\gamma}, \tag{3.6}$$

where γ = 2.7. Above O(100 GeV) the spectra become steeper, eventually reaching γ = 3.7.
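A schematic sketch of this steepening (the smooth interpolation dΦ/dE ∝ E^−2.7/(1 + E/ε) is a simplification of the full cascade result; ε = 115 GeV is the pion critical energy quoted above):

    import math

    def conv_flux(E_GeV, eps=115.0):
        """Schematic conventional atmospheric flux: E^-2.7 softening to E^-3.7."""
        return E_GeV ** -2.7 / (1.0 + E_GeV / eps)

    # Local spectral index -dln(flux)/dln(E): runs from ~2.7 to ~3.7.
    for E in (10.0, 100.0, 1e4, 1e6):
        h = 1e-4
        slope = -math.log(conv_flux(E * (1 + h)) / conv_flux(E)) / math.log(1 + h)
        print(f"E = {E:>9.0f} GeV -> local index ~ {slope:.2f}")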
While the conventional production described in equation 3.5 produces neutrinos and anti-neutrinos with flavor ratio νe:νμ:ντ = 1:2:0, the prompt component yields approximately equal amounts of electron- and muon-flavored neutrinos^8. Further, the ratio νe/νμ decreases with energy above O(GeV)^9,
7 The fraction of pion- relative to kaon-induced muons is energy dependent. About 8% (20%) of vertical muons come from decay of kaons at 100 GeV (1 TeV) (27% asymptotically) [28].
8 The large mass of the τ suppresses the production of ν_τ and ν̄_τ in atmospheric processes.
9 This is in contrast to the lower density environments considered in section 3.3, where the ratio stays constant to much higher energies [28].
Figure 3.9. Production of muons and neutrinos in the Earth’s atmosphere.
due to the increasing muon decay length [28]. Further, there is an asymmetry in the yield of neutrinos compared to anti-neutrinos. This is due to the excess of positively charged pions and kaons in the forward fragmentation region of proton-initiated interactions, and further due to the larger number of protons than neutrons in the primary CR spectrum.
Particle Flux at Sea Level
Since muons are relatively long-lived (lifetime 2.2 μs [2]), some penetrate to sea level and further through kilometers of Earth material such as ice, rock, and water. The muon flux at the surface is around 100 particles m^−2 s^−1 sr^−1. The abundance of particles in the air-shower as it moves through the atmosphere is shown on the left side of figure 3.10. Muons and neutrinos dominate the picture at sea level, and any remaining electrons, hadrons, or photons will be absorbed in the Earth material. At the depths of IceCube, the muons outnumber the atmospheric neutrinos by a factor of ∼ 10^6. The plot on the right side of figure 3.10 shows the vertical intensity of muons as a function of depth, expressed in kilometers water equivalent (km w.e.). IceCube is located at a depth of about 1.8 km w.e. and is subjected to a large number of atmospheric muons from the Southern hemisphere. An efficient way to reduce this flux is to only study particles from the Northern hemisphere, where the Earth itself can be used as a filter, blocking most muons. At about 20 km w.e., the muon flux becomes constant, constituting an isotropic neutrino-induced background.
3.5 Acceleration Mechanisms
The power-law nature of the CR energy spectrum discussed in section 3.1 is indicative of non-thermal^10 processes, where a relatively small number of particles is accelerated by a focused energy out-flow from a powerful source [30]. In this section we will discuss the mechanisms needed to transfer this macroscopic energy to individual particles, in particular the so-called standard model of CR acceleration: Diffusive Shock Acceleration (DSA).
The acceleration of high-energy particles in the Solar system occurs both on large scales, e.g. in interplanetary shock waves associated with the solar wind, and in the vicinity of point sources, such as the particle acceleration to GeV energies observed in connection with solar flares [28]. Similarly, both extended and point sources are likely to play a role in the acceleration of Galactic and extragalactic CRs.

It is also possible that acceleration (initial or further) takes place during propagation, through interactions with large gas clouds containing magnetic irregularities. In general, acceleration to high energies (TeV energies and above) requires a massive bulk flow of relativistic charged particles [63]. The
10 Emission not described by a black body spectrum with a given temperature.
[Figure 3.10: left panel shows the vertical flux (m^−2 s^−1 sr^−1) of p+n, e+ + e−, π+ + π−, μ+ + μ−, and νμ + ν̄μ versus atmospheric depth (g cm^−2) and altitude (km); right panel shows the vertical muon intensity versus depth.]
Figure 3.10. Left: Vertical CR fluxes in the atmosphere above 1 GeV (displays the
integrated flux above 1 GeV for all particles except electrons, for which the threshold
Ee > 81 MeV applies instead) as a function of atmospheric altitude/depth. Data points
show experimental measurements of μ− . Figure taken from [2]. Right: Vertical muon
intensity as a function of kilometer water equivalent depth. For data points see caption
of figure 28.7 in [2]. The shaded area at large depths shows neutrino-induced muons
with an energy above 2 GeV. Figure taken from [2].
presence of relativistic gas in the ISM has been established through mainly
two key observations: synchrotron radiation induced by relativistic electrons
(and protons) and the presence of γ-rays from the decay of neutral pions.
Earth-bound accelerators use electric fields to give energy to particles, typically confined in circular orbits by strong magnetic fields. But in most astrophysical environments such electric fields would immediately short-circuit, since the particles are in a plasma state with very high electric conductivity [29]. Instead, we consider so-called shock acceleration, where low-energy particles are given higher energies through repeated stochastic encounters with magnetic irregularities.
We start by deriving the basic concept of shock acceleration, closely following the discussion in [28]: A test particle undergoing stochastic collisions
gains an energy ΔE = ξE in each encounter. After n such collisions the energy is En = E0(1 + ξ)ⁿ, where E0 is the initial (injection) energy. Assuming an
escape probability, Pesc , from the finite acceleration region per encounter, the
proportion of particles with an energy greater than E is [28]:
N(>E) ∝ (1/Pesc) · (E/E0)^(−γ),    (3.7)
where

γ ≈ Pesc/ξ ≡ (1/ξ) · (Tcycle/Tesc),    (3.8)
where we have introduced Tcycle as the time for each acceleration cycle n and Tesc as the time for escape from the acceleration region. These two parameters are characteristics of the source considered and are in general energy dependent. The maximal energy provided is Emax = E0(1 + ξ)^(t/Tcycle), where t is the total acceleration time. A power-law spectrum emerges from the stochastic nature of this process in combination with the finite probability of escape, but a definite prediction of the spectral index is not provided [28].
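To make the emergence of the power law concrete, the sketch below (our own toy Monte Carlo, with assumed values ξ = 0.1 and Pesc = 0.05) draws the number of encounters each particle survives and checks that the integral spectrum follows equation 3.7 with the index of equation 3.8:

```python
import numpy as np

rng = np.random.default_rng(42)
xi, p_esc, E0 = 0.1, 0.05, 1.0  # assumed gain per encounter, escape prob., injection energy

# The number of encounters survived before escape is geometrically distributed,
# so E_n = E0 * (1 + xi)**n directly produces a power-law spectrum.
n = rng.geometric(p_esc, size=200_000) - 1
E = E0 * (1.0 + xi) ** n

# Fit the slope of N(>E) on a log-log scale over roughly a decade in energy.
E_sorted = np.sort(E)
N_greater = np.arange(E_sorted.size, 0, -1)
sel = (E_sorted > 2.0) & (E_sorted < 20.0)
slope = np.polyfit(np.log(E_sorted[sel]), np.log(N_greater[sel]), 1)[0]

print(f"gamma ~ Pesc/xi = {p_esc / xi:.2f} (eq. 3.8), fitted index: {-slope:.2f}")
```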
Fermi acceleration is a mechanism to transfer macroscopic kinetic energy
of moving magnetized plasma to single charged particles, and builds upon the
simple model of shock acceleration described above. It requires that particles
are magnetically confined to the source region for sufficiently long time to
reach the required energies. Further, the encounters have to be collision-less in
order for the mechanism to be effective, otherwise collisions with surrounding
particles would cause too large energy losses for a net acceleration to take
place.
In the original version, Fermi considered charged particle interactions with
irregularities in dense magnetic clouds, so-called magnetic mirrors, depicted
to the right in figure 3.11. Here the scattering angle is uniform and particles can either lose or gain energy depending on the angle. The average energy gain
per collision ξ is second-order in the velocity of the cloud, and therefore this
is referred to as second-order Fermi acceleration. There are several problems
with this type of mechanism. In particular, observations show that the cloud density is too low and that the random velocities of interstellar clouds are very small [29], i.e., this is an inefficient mechanism for accelerating particles to high energies.
Figure 3.11. Left: Fermi acceleration of 1st order. Right: Fermi acceleration of 2nd
order.
The current baseline model is instead associated with particle acceleration
in strong shock waves¹¹, DSA. Evidence for strong shocks has been found,
e.g., in the supersonic shells of supernova remnants propagating in the interstellar medium [83].
3.5.1 Diffusive Shock Acceleration
In the DSA model we consider a shock wave that moves through a magnetized
plasma, into which we inject a test particle that traverses the shock wave head-on, entering the shocked region from the unshocked region, see left side of
figure 3.11. The particle undergoes repeated elastic scattering on magnetic
field irregularities in the region of shocked gas (not due to collisions with other
particles) and on average ends up co-moving with the bulk of the shocked
gas. A cycle is completed if the particle subsequently scatters back to the
unshocked region, gaining the same energy as for the initial transition [28].
In contrast to second-order Fermi acceleration where a particle both can
lose and gain energy as it traverses the magnetic mirrors, the collisions with
the shock front are always head-on and lead to an increase in energy. The
key requirement is the presence of a strong shock wave and that the velocities
of the relativistic charged particles in the medium are randomized on either
side of the shock so that they are co-moving with the medium. The latter
is achieved through the elastic scattering on the magnetic field irregularities.
This is the clever aspect of this acceleration mechanism: the same dynamics take place at passage on either side of the shock front.
The average energy gain from such a cycle of a non-relativistic shock can
be shown to be [28]:
ξ ∼ (4/3) β,    (3.9)
where β is the relative velocity between the shocked and unshocked gas, i.e.,
the energy gain is proportional to the initial energy, hence leading to a power-law spectrum as illustrated in the beginning of the section. The acceleration
continues in cycles until the particle escapes. The escape probability in this
case translates to the probability for particles to scatter away from the shock
front. Further, this kind of acceleration predicts a specific spectral index [28]:
dN/dE ∝ E⁻²    (3.10)
The CR escape probability increases with energy, which leads to the slightly softer spectrum, γ ∼ 2.7, observed at Earth [84].
DSA is thought to play an important role for the acceleration of particles
up to the knee at 10¹⁵−10¹⁶ eV, but fails to explain the flux above this limit
¹¹ A shock wave is formed when the speed of material exceeds the speed of sound in the medium. This propagating disturbance is characterized by a nearly instantaneous change in pressure, temperature, and density.
Figure 3.12. Hillas diagram of potential sources of acceleration. Crab indicates the Crab nebula and IGM the Intergalactic Magnetic Field. Reprinted by permission from Macmillan Publishers Ltd: Nature [85], copyright 2009.
[29]. In particular, for energies above ∼ 10¹⁸ eV, even the most exotic Galactic
candidates for acceleration are deemed inefficient [29].
3.6 Potential Acceleration Sites
We saw in sections 3.3 and 3.5.1 that Galactic SNRs can provide the power needed for accelerating CRs to energies up to 10¹⁵−10¹⁶ eV through DSA. The most prominent candidates for the highest-energy particles (≳ 10¹⁸ eV) include extragalactic sources such as GRBs and the accretion disks and relativistic jets of AGNs. More exotic scenarios (so-called top-down scenarios) include decays of magnetic monopoles and the Z-burst mechanism, where neutrinos with extremely high energy interact with relic neutrinos from the Big Bang at the resonance for the neutral weak gauge boson Z⁰, producing CRs in subsequent decays [86]. The latter has been excluded by neutrino experiments such as ANITA, a radio-balloon experiment in Antarctica [87, 88].
Diffuse fluxes of neutrinos are generally believed to be due to either unresolved point sources or a truly diffuse flux, i.e., neutrino production that is
not localized or limited to a particular source, but that instead may be associated with CR hadrons and electrons interacting with gas and photon fields in
the same way as the dominant diffuse γ-ray emission from the Galactic plane
[29]. Further, so-called hidden sources may exist, where neutrinos are the only
particles escaping and the rest is absorbed in the source. Such sources cannot
be the sources of CRs but may still provide insight regarding the interactions
in the vicinity of the sources.
We saw in the beginning of section 3.5 that the sources of acceleration are
generally characterized by their active time and the maximal energy they can
produce. In particular, higher energies can be achieved with extreme magnetic
fields or a long shock lifetime. The Hillas criterion is a way to constrain potential sources by considering the size of the acceleration region and the magnetic field present in it. The maximal attainable energy of a CR nucleus of charge Z, accelerated in a magnetic field of strength B over a region of size L, can be written as [29]:
Emax ∼ Z · β · (B / 1 μG) · (L / 1 kpc) EeV,    (3.11)
where β is the velocity of the shock front in terms of c. This condition is
illustrated in figure 3.12, showing magnetic field strength B as a function of
source size L for a large number of potential high-energy accelerators. The
solid (dashed) line represent a proton (an iron nucleus) with energy E = 1020
eV. Only source above each line can accelerate the corresponding particle to
energies above 1020 eV.
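Equation 3.11 is straightforward to evaluate; the sketch below (parameter values are illustrative order-of-magnitude assumptions, not read off figure 3.12) shows why compact objects with extreme fields and extended objects with modest fields can reach comparable maximal energies:

```python
def hillas_emax_eev(Z, beta, B_uG, L_kpc):
    """Maximal energy in EeV from eq. 3.11: Emax ~ Z * beta * (B/1 uG) * (L/1 kpc)."""
    return Z * beta * B_uG * L_kpc

# Illustrative, assumed parameters for protons (Z = 1):
print(hillas_emax_eev(1, 0.03, B_uG=10.0, L_kpc=2e-3))  # SNR-like shell: ~6e-4 EeV (knee region)
print(hillas_emax_eev(1, 1.0, B_uG=1e18, L_kpc=3e-16))  # neutron-star-like: ~300 EeV
print(hillas_emax_eev(1, 0.3, B_uG=1e4, L_kpc=3e-2))    # AGN-core-like: ~90 EeV
```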
In the remainder of this section we will discuss several different source
candidates that can provide the power needed to reproduce the CR power-law
spectrum from a few GeV to 10²⁰ eV. For a more complete list of candidate
sources see e.g. [84] and [89]. The basic assumption for all of the sources presented below is that they are optically thin. This means that we can correlate
the predicted neutrino emission with the observed TeV γ-ray emission in the
case of a hadronic acceleration scenario as mentioned in section 3.3 [89]. We
will briefly mention GRBs in section 3.6.2 but otherwise focus on sources of
steady emission, relevant for this thesis.
3.6.1 Galactic Sources
Supernova Explosions/Remnants
Scenarios emerging from the event of a supernova include some of the best
candidates for neutrino emission in the Galaxy. A supernova is a stellar explosion triggered either by the gravitational collapse of a massive star or through
the re-ignition of a degenerate star, e.g., a white dwarf accumulating matter
from a companion eventually reaching the point of so-called runaway nuclear
fusion [90]. These are extremely bright events that briefly outshine the host
galaxy and release a large amount of energy through the emission of MeV
neutrinos prior to the optical burst [89].
Such neutrinos were detected from supernova SN 1987A¹², mentioned in
section 1.1. Further, a burst of neutrinos in the TeV range is predicted from
proton-nuclei interactions in shock-fronts as they break out of their progenitor
star after core collapse [91].
Supernovae are quite rare, and only occur on average every thirtieth year
in our Galaxy. Due to their bright emission they have become important in
the historical perspective. E.g. the Sumerians discovered the Vela supernova
6,000 years ago and the left-over, the SNR, is clearly visible today, constituting a prime candidate for CR acceleration [92]. Further, in 1054 the Crab
supernova was observed by Chinese astronomers [92]. The Crab nebula is
particularly important since it has a close to time-independent brightness and
hence can be used as a calibration source, a standard candle in γ-ray astronomy.
SNRs consist of the remaining structure of ejected material resulting from
a supernova explosion. Neutrino emission is expected from shock acceleration in both shell-type SNRs and Pulsar Wind Nebulae (PWNs) [89]. The
former is bound by an expanding shock wave driving into the ISM at supersonic velocity, accelerating particles through DSA, and producing radio and
X-ray emission via synchrotron radiation. The more energetic emission, above MeV energies, can originate from inverse-Compton scattering, Bremsstrahlung, or neutral pion decay. A hadronic origin is in general not established.
¹² Located in the outskirts of the Tarantula Nebula in the Large Magellanic Cloud.
Figure 3.13. Shell of supernova remnant RX J1713.7-3946 resolved at TeV γ-ray energies by HESS (the HESS point spread function is shown in the bottom left corner). The black contours show the X-ray surface brightness in the range 1-3 keV observed by ASCA. Credit: HESS Collaboration.
The life of a SNR involves mainly two phases that are of interest for γ-ray
and neutrino emission. The first phase is the free expansion of the ejected
material, which typically lasts 2-10 years [74]. Hadronic acceleration and interactions take place in the high density of the expanding envelope [74] as it interacts with the surrounding ISM. This is followed by the so-called Sedov-Taylor phase, which can last several hundred years. This is when most of the particles escape the acceleration region and enter the Galaxy as CRs [29].
PWNs also eject material into the ISM, creating a shock wave where particles are accelerated. These objects are believed to be closely connected to
the birth and evolution of SNRs. PWNs are thought to be powered by rapidly
rotating pulsars with very strong magnetic fields, dissipating their rotational
energy over several thousands of years. These pulsars are likely the remnants
of old core-collapse supernovae, i.e., rotating neutron stars with a beamed
lighthouse-style emission. They are observed in radio, X-rays, and γ-rays.
The emission is produced as the pulsar wind interacts with the surrounding
material to form a shock front [93].
Many SNRs have now been resolved by γ-ray telescopes such as HESS
and we can see the shock fronts as a shell surrounding the source, see figure
3.13. For SNRs in regions of dense molecular clouds, hadronic processes have been confirmed [71]; these therefore constitute prime candidates for sources of neutrinos produced in our Galaxy, see figure 3.7 and the discussion in section 3.3.
Microquasars
High-energy neutrinos could be produced in the relativistic jets of Galactic
microquasars as suggested in [94, 95]. These compact X-ray binary systems
consist of a stellar-mass black hole or a neutron star (M > 3 M⊙) that cannibalizes a non-compact companion star [84]. The characteristics of microquasars strongly resemble those of AGNs, with an accretion disk and relativistic jets, but on a much smaller scale (the black holes involved in AGNs have ∼ 10⁸ M⊙).
Assuming that CRs are accelerated in the jets, neutrinos could be produced
through the hadronic processes discussed in section 3.3. The energy could
come either from the accretion of matter from the companion star or from magnetic dipole radiation (neutron stars have extremely strong (10¹² G) surface magnetic fields) [84]. The recent detection of TeV γ-rays from such objects
has demonstrated that these are indeed sites of effective acceleration of particles up to multi-TeV energies [95]. It remains to be seen if the acceleration is
leptonic and/or hadronic in origin.
Molecular Clouds and Diffuse Emission
Neutrinos can be produced by the decay of charged pions produced in CR
interactions with the Galactic disk, resulting in a diffuse emission. The astrophysical sources themselves may be hidden behind dense regions of matter
and radiation. HESS has observed extended regions of γ-rays from known
directions of molecular clouds (star formation regions) [96]. The location of
these can be traced with e.g. Planck CO observations. These data support the hadronic hypothesis and suggest that the source is compatible with the emission from an embedded supernova explosion ∼ 10⁴ years ago [96].
3.6.2 Extragalactic Sources
We saw already in section 3.6 that many extragalactic sources can contribute
to the CR spectrum at the highest energies. These include: AGNs, GRBs,
starburst galaxies (galaxies with unusually high star formation rate that can
act as calorimeters for CRs produced in SNRs [80]), magnetars (neutron stars
with very strong magnetic fields), galaxy clusters (where shocks form either
through accretion of matter or in termination shocks of galactic winds [97]),
etc. In this section we will focus on the two prime source candidates for CR
production and acceleration: AGNs and GRBs.
Active Galactic Nuclei
AGNs are the most luminous steady sources observed in the Universe. The
strong electromagnetic radiation is believed to be caused by matter falling
into a super-massive black hole (M ∼ 10⁸ M⊙) in the galactic center, forming an accretion disk [62]¹³. They emit electromagnetic radiation in the form of
X-rays, γ-rays and optical light, and many have time-dependent periods of
enhanced emission of the order of days or weeks. Due to the wide range of
radiation seen, these objects are prime targets for multi-messenger astronomy
as discussed in section 1.2.
Many AGNs have relativistic jets perpendicular to the accretion disk (collimated outflows of accelerated gas). High-energy neutrinos may be produced
as the accelerated particles interact with X-rays or other photons in the core
region of the AGN or in the jets, but the relation between the luminosity of
these different types of particles is unknown and depends on the models assumed. These models are highly complex. A review including derivations
of the maximum energy threshold, various energy losses and normalizations
can be found in [84]. The so-called unified AGN model aims to explain the
characteristics observed from a large variety of astrophysical objects, as due
to different viewing angles of AGNs [98], i.e., the observed electromagnetic
emission depends on the orientation of our line of sight relative to the accretion disk and
jet of the super-massive black hole. The unified model is illustrated in figure
3.14. This model seeks to explain a large variety of sources, amongst others:
Seyfert galaxies, BL Lacs, and Flat-Spectrum Radio Quasars (FSRQs). For a
review of such astrophysical objects and their characteristics, see e.g. [98].
¹³ The luminosity can range from 10⁴⁰ erg s⁻¹ (nearby galaxies) to 10⁴⁷ erg s⁻¹ (distant point sources, so-called quasars) [29].
Gamma Ray Bursts
GRBs are the most luminous transient objects in the Universe and have been
proposed as possible candidates for high-energy neutrinos [99]. For the duration of the burst they outshine the entire Universe. They are observed isotropically in the sky with a rate of about 1 per day. They are usually classified by
their burst time: short GRBs on average last 0.3 s and long on average 30 s,
and are thought to arise due to somewhat different mechanisms [29].
Short GRBs might be due to the merger of compact binary systems such as neutron stars. They are associated with regions of little or no star formation and have no association with supernovae. Long GRBs have bright afterglows
and have no association to supernovae. Long GRBs have bright afterglows
and can be linked to regions of active star formation and in many cases to
core-collapse supernovae, black holes with strongly collimated jets [29].
GRBs have non-thermal electromagnetic emission that can be divided into
three phases: a less bright precursor phase 10-100 s before the bright so-called
prompt phase that is subsequently followed by an afterglow.
Figure 3.14. An illustration of the unified AGN model, that aims to explain the characteristics observed from a large variety of astrophysical objects, as different viewing
angles of AGNs [98]. Credit: ‘Active Galactic Nuclei’, V. Beckmann and C. Shrader,
2012. Copyright Wiley-VCH Verlag GmbH & Co. KGaA. Reproduced with permission.
The so-called fireball model is a widely used theoretical framework for
models of GRBs. A centralized explosion initiates a relativistic blast wave that
propagates through the previously ejected material surrounding the source.
Electrons, protons, and γ-rays are produced and accelerated in the shocked
regions when shells of material moving at different speeds collide [74]. High-energy neutrinos (≳ 100 TeV) are produced as a result of pγ-interactions. The
physics is similar to that of jet acceleration in AGNs, the main difference being
the short time scales and much higher boost in energy [74]. IceCube has put
strong bounds on the fireball model and excluded large parts of the parameter
space, see [100].
Summary
We have now reviewed the theoretical background and motivation for neutrino
astronomy as well as discussed the primary candidate sources. Before we go
on to discuss the details of the analysis and tools used, we elaborate a bit
on how neutrinos can be detected, in particular using cubic-kilometer-scale
ice/water Cherenkov detectors such as IceCube. This is discussed in the
next chapter.
4. Neutrino Detection Principles
“I have done a terrible thing,
I have postulated a particle that cannot be detected.”
Wolfgang Pauli
Since neutrinos interact only gravitationally and through the weak force, there are limited ways in which they can be detected. Experiments like IceCube
and ANTARES detect neutrinos through the charged particles they produce in
interactions with atoms in natural ice/water or the nearby bedrock. Similarly,
large under-ground detectors such as Super-K detect particles produced in the
interactions with purified water.
These interactions proceed either through charged weak interactions where
a W ± boson acts as force mediator, so-called CC interactions, or through neutral weak interactions where a Z 0 boson acts as force mediator, so-called NC
interactions. We focus on the semi-leptonic processes, which proceed through the interactions:
νl(ν̄l) + Y → l⁻(l⁺) + X    (CC)    (4.1)

νl(ν̄l) + Y → νl(ν̄l) + X    (NC)    (4.2)
where l is a charged lepton (e, μ, τ), Y represents either a nucleus, nucleon, or
quark depending on the kinetic energy of the incoming neutrino and X is the
final hadronic state. The corresponding Feynman diagrams for the underlying
neutrino-quark interactions are depicted in figure 4.1. Note that lepton flavor
conservation imposes that the produced lepton is of the same family as the
incoming neutrino.
Electron anti-neutrinos may interact directly with electrons through the
Glashow resonance ν̄e + e− → W − . This occurs at neutrino energies close to
6.3 PeV [101], where the center of mass energy ECM ∼ MW , see figure 4.2.
The final states may be hadronic and/or electromagnetic showers. Further, so-called pure muon (ν̄e + e⁻ → ν̄μ + μ⁻) and lollipop (ν̄e + e⁻ → ν̄τ + τ⁻) signatures are possible.
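The resonance energy itself follows from requiring the ν̄e e⁻ center-of-mass energy to equal the W mass, i.e., s = 2 me Eν = M_W²; a one-line check (PDG mass values):

```python
M_W = 80.38     # GeV, W boson mass
m_e = 0.511e-3  # GeV, electron mass

E_res = M_W**2 / (2 * m_e)  # resonant neutrino energy for nu_e-bar + e- -> W-
print(f"E_res ~ {E_res / 1e6:.1f} PeV")  # ~6.3 PeV
```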
[Figure 4.1 diagrams: νl d → l⁻ u and ν̄l u → l⁺ d via W± exchange (CC, upper row); νl q → νl q and ν̄l q → ν̄l q via Z⁰ exchange (NC, bottom row), q = u, d.]
Figure 4.1. Feynman diagrams of neutrino-quark interactions through charged (upper
row) and neutral (bottom row) currents.
In the following sections we will focus on the topics relevant for neutrino
detection in IceCube. We start by discussing the cross-section of neutrinos
and then describe how coherent Cherenkov radiation is created as the result of
polarization in the detector medium. We saw in section 3.4 that the muons can
travel long distances before stopping. The mechanism behind the energy loss
of muons is covered in section 4.3.
4.1 Neutrino Cross-Section
At the lowest energies, Eν < 1 MeV, neutrino interactions proceed through
thresholdless processes such as elastic scattering of free electrons, coherent
scattering with nuclei and neutrino capture [101].
Slightly higher, up to Eν ∼ 100 MeV, the neutrinos start to probe the inner
structure of the nuclei and interact with individual nucleons. These interactions include primarily inverse beta decay, ν̄e + p → e⁺ + n, but also more complex interactions
with collective excitations and bound nuclear states [101].
The description of neutrino scattering becomes increasingly more diverse
and complex at even higher energies, Eν ∼ 0.1 − 20 GeV [101], where several
different mechanisms compete: elastic and quasi-elastic scattering, resonant
pion production, and Deep Inelastic Scattering (DIS). In the first two of these,
neutrinos scatter off an entire nucleon that may change quark composition
Figure 4.2. Neutrino cross-section as a function of energy. Figure taken from [102].
(only in CC interactions) but does not break up. In resonant production neutrinos excite the target nucleon creating baryonic resonances, e.g. Δ or N ∗ states,
that subsequently decay creating one or several final state pions. The neutrino
cross-section, averaged over neutron and proton, is illustrated as a function of
energy (above 10 GeV) in figure 4.2.
DIS is the process where neutrinos can start to resolve the internal structure
of the nucleon targets and interact with individual valence quarks giving rise
to hadronic showers as the nucleons break up. The matrix element for antineutrino-quark scattering is different from that of neutrino-quark scattering.
This is due to the chiral nature of the weak force, the V-A coupling discussed
in section 2.4, as it interacts with the particles of the target material and leads
to an interaction that is about a factor of two stronger for neutrinos compared to anti-neutrinos¹. DIS becomes dominant at energies Eν ∼ 20−500 GeV.
The neutron and proton averaged DIS differential cross-section for a target
N with mass M is given as [103]:
d²σ/(dx dy) = (G_F² M Eν / π) · [M_W² / (Q² + M_W²)]² [x q(x, Q²) + x q̄(x, Q²)(1 − y)²]    (CC)

d²σ/(dx dy) = (G_F² M Eν / π) · ½ [M_Z² / (Q² + M_Z²)]² [x q⁰(x, Q²) + x q̄⁰(x, Q²)(1 − y)²]    (NC)    (4.3)
¹ A naive calculation, see e.g. [4], gives a factor of 3, but when the structure functions of the nucleon are taken into account this amounts to a factor of 2.
where x = Q²/(2Mκ) and y = κ/Eν are the Bjorken scaling variables² [103], −Q² is the invariant momentum transfer between the incident neutrino and outgoing lepton, κ is the energy loss in the laboratory frame, M_W/Z are the masses of the respective weak gauge bosons, and G_F is the Fermi constant. Due to the universality of the weak interaction, these cross-sections only depend on kinematics and the quark distribution functions q, q̄, q⁰, and q̄⁰. See [103] for more details.
The cross-sections at Eν ∼ 1 TeV−1 EeV are basically extensions of the ones at lower energies. The main difference is that above about 10 TeV the propagator term is no longer dominated by the mass of the exchange bosons W± and Z⁰, leading to a suppression of the cross-section caused by the 1/Q² term [101]. Further, as interactions with the quark-antiquark symmetric quark-sea become more important, the suppression of anti-neutrino-quark interactions relative to neutrino-quark interactions becomes less pronounced, see figure 4.1.
At Eν ≲ 10⁶ GeV, DIS cross-sections are successfully described in the framework of pQCD (perturbative QCD), and parameterizations of the nucleon structure functions can be chosen according to e.g. CTEQ5 [102, 104]. But as the neutrino interactions with the quark-sea become more important, the uncertainties in these models increase. At the highest energies, figure 4.2 shows two models for extrapolating the structure functions: 'pQCD' (smooth power-law extrapolation) and 'HP' (Hard Pomeron enhanced extrapolation), see [102] and references therein.
4.2 Cherenkov Radiation
When a charged particle traverses a dielectric medium such as ice, photons
are produced as a result of polarization of the medium. As this polarization
relaxes back to equilibrium, photons are emitted, creating a burst of light.
If the particle travels faster than the speed of light in the medium, c/n(λ),
constructive interference occurs at an angle θc . This disturbance radiates as a
coherent electromagnetic shock wave, similar to the sonic boom produced by
an aircraft in supersonic flight. See details in e.g. [105].
We can derive a minimum threshold energy, Eth , the Cherenkov threshold,
below which the particle does not emit coherent Cherenkov light:
Eth = mc²/√(1 − β²) = mc²/√(1 − n⁻²(λ)),    (4.4)
where m is the rest mass of the particle, β = v/c where v is the particle’s velocity, λ the photon wavelength, and n(λ) the wavelength dependent refractive
2x
can be interpreted as the fraction of the nucleon’s momentum that is carried by the target
quark, while y is the fraction of the lepton’s energy that is transferred to the nucleon rest frame
[2].
Figure 4.3. Illustration of the Cherenkov effect. The left illustration shows the spherical wavefronts for a particle with β = 0.5, where a slight concentration along the
direction of travel can be seen. The right illustration shows a particle with β = 1,
where the waves interfere constructively to produce the characteristic Cherenkov cone
of light. Figure taken from [106].
index of the medium. This gives a threshold of about 0.8 MeV for electrons,
160 MeV for muons, and 2.7 GeV for taus, assuming n = 1.33 [106], which corresponds to the refractive index of ice in the range 300-600 nm where IceCube
Photo Multiplier Tubes (PMTs) are sensitive [107].
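These thresholds follow directly from equation 4.4; a minimal sketch reproducing the quoted numbers (PDG rest masses, n = 1.33):

```python
import math

def cherenkov_threshold_mev(mass_mev, n=1.33):
    """Total energy below which a particle emits no Cherenkov light (eq. 4.4)."""
    return mass_mev / math.sqrt(1.0 - 1.0 / n**2)

for name, m in [("electron", 0.511), ("muon", 105.66), ("tau", 1776.9)]:
    print(f"{name}: {cherenkov_threshold_mev(m):.1f} MeV")
# electron: 0.8 MeV, muon: 160.3 MeV, tau: 2695.4 MeV ~ 2.7 GeV
```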
The Cherenkov angle θc , the angle of the coherent wavefront relative to the
direction of the velocity of the charged particle, is (see figure 4.3):
cos θc = 1/(n(λ)β),    (4.5)
where θc ≈ 41.2◦ for β = 1 and n = 1.33. Note that the observable photons are
not emitted exactly perpendicular to this shock front, since they move with the group velocity, governed by the group refractive index ng = n − λ dn/dλ, where n is the phase refractive index mentioned above.
shown to have a small effect [108].
The number of Cherenkov photons emitted per unit length x and wavelength
λ can be estimated from [105]:
d²N/(dx dλ) = (2πα/λ²) (1 − 1/(β² n²(λ))),    (4.6)
where α ≈ 1/137 is the fine structure constant. Integrating equation 4.6 in the
range where IceCube PMTs are sensitive yields about 330 Cherenkov photons
[Figure 4.4: stopping power (MeV cm²/g) of μ⁺ in Cu versus muon momentum (MeV/c to TeV/c), indicating the Lindhard-Scharff, Anderson-Ziegler, Bethe, and radiative regions, the point of minimum ionization, and the muon critical energy Eμc.]
Figure 4.4. Stopping power of muons in copper. At GeV energies the energy loss is
dominated by ionization losses and is nearly constant. Above TeV energies, stochastic
radiative losses dominate and the energy loss is approximately proportional to the
muon energy. Figure from [2].
per cm of relativistic charged-particle track. The total number of photons resulting from the energy deposition of an entire event is hence largely a function
of the total track length. Note further, that the continuous energy loss of the
muon and its secondary products from the Cherenkov effect has a negligible
contribution to the energy losses we will discuss in the next section. Assuming
the loss is due entirely to the emission of ∼ 330 photons per cm, the estimate is O(100) keV/m. This should be compared with, e.g., the energy loss by a muon in the so-called minimum ionizing region of O(200) MeV/m.
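Both numbers can be checked by integrating equation 4.6 directly; a small sketch (assuming β = 1, a constant n = 1.33, and a mean photon energy of ∼ 3 eV for the keV/m estimate):

```python
import math

alpha, n_ice, beta = 1 / 137.036, 1.33, 1.0

def photons_per_meter(lam_min=300e-9, lam_max=600e-9):
    """Integral of eq. 4.6 over wavelength: 2*pi*alpha*(1 - 1/(beta*n)^2)*(1/lmin - 1/lmax)."""
    return 2 * math.pi * alpha * (1 - 1 / (beta * n_ice) ** 2) * (1 / lam_min - 1 / lam_max)

n_per_m = photons_per_meter()
print(f"~{n_per_m / 100:.0f} photons per cm")                 # ~330
print(f"~{n_per_m * 3.0 / 1e3:.0f} keV per m at ~3 eV each")  # O(100) keV/m
```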
4.3 Energy Loss Mechanisms
Energy loss in matter is due to a number of mechanisms, of which the two most important in the energy range of neutrino telescopes are ionization and radiative losses, see e.g. figure 4.4. The former involves ionization of atoms:
many interactions but with a small energy transfer each, continuously along
the trajectory. Radiative losses can be divided into: e+ e− pair production,
Bremsstrahlung, and inelastic photonuclear interactions. These are generally
stochastic in nature with significant losses in single interactions.
Figure 4.4 shows the stopping power of muons (μ+ ) in Copper (Cu) as a
function of particle momentum. At energies O(GeV) the stopping power is
dominated by ionization losses described by the Bethe-Bloch formula and is
[Figure 4.5: median range (km ice) versus muon energy (GeV), for minimum muon energies Eμ,min = 0 and 1 TeV.]
Figure 4.5. Median range of muons in ice as a function of energy. The solid line shows the median range, while the dotted line shows the range after which 50% of the muons still have a kinetic energy of at least 1 TeV. Figure from [106].
close to constant, see equation 4.7. This is the so-called minimum ionizing
region that manifests itself as a continuous energy loss along the track. The
Bethe-Bloch equation describes the mean rate of energy loss of moderately
relativistic, charged heavy particles [2]:
−dE/dx ∝ (1/β²) · ln(const · β²γ²),    (4.7)

where β = v/c (v is the velocity of the particle) and γ = 1/√(1 − β²).
The energy loss of TeV muons is instead dominated by radiative losses approximately proportional to the muon energy. We can parametrize this energy
loss as [109]:
−dEμ/dx = a + bEμ,    (4.8)

where a ≈ 0.26 GeV mwe⁻¹ (mwe = meters of water equivalent) corresponds to the ionization losses and b ≈ 3.6 × 10⁻⁴ mwe⁻¹ corresponds to stochastic radiative losses in ice [110].
Figure 4.5 shows the median range of muons in ice as a function of energy calculated with PROPOSAL (PRopagator with Optimal Precision and
Optimized Speed for All Leptons) [106, 111]. For energies relevant in this
analysis, 100 GeV to a few TeV, the muon range goes from a few hundred
meters up to about 10 kilometers.
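Integrating equation 4.8 analytically gives R = (1/b) ln(1 + bE/a). The sketch below implements this continuous-slowing-down estimate; since it neglects the stochastic nature of the radiative losses it only approximates the median range shown in figure 4.5:

```python
import math

a, b = 0.26, 3.6e-4  # GeV/mwe and 1/mwe, from eq. 4.8
rho_ice = 0.92       # g/cm^3, so 1 m of ice corresponds to ~0.92 mwe

def muon_range_km_ice(E_gev):
    """Range from -dE/dx = a + b*E, integrated down to E = 0."""
    return math.log(1.0 + b * E_gev / a) / b / rho_ice / 1000.0

for E in (1e2, 1e3, 1e4):
    print(f"E = {E:.0e} GeV -> ~{muon_range_km_ice(E):.1f} km of ice")
# ~0.4, ~2.6, and ~8.1 km: from a few hundred meters up to the ~10 km quoted above
```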
Electrons lose energy through the same mechanisms as muons, but due to
their relatively small mass, the average fractional energy transfer is much
Figure 4.6. Illustration of the event topologies in IceCube. The left figure shows a
muon-neutrino interacting inside the instrumented detector volume creating a charged
muon, a so-called track event, as well as an incoming muon, either atmospheric or
neutrino-induced, coming in from the right, also producing a track-like event in the
detector. The right figure shows an electron antineutrino interacting inside the detector
giving rise to a cascade-like event with a leptonic and hadronic shower component.
Cascades are also produced by short-lived taus produced in tau-neutrino interactions.
larger. Bremsstrahlung starts to dominate already at ∼ 100 MeV [109] in water, while the corresponding number for muons is ∼ 1 TeV [109]. The Bremsstrahlung photons will themselves create e⁺e⁻ pairs that can lose energy through the same process. This gives rise to a so-called electromagnetic cascade that continues until the photon energies drop below the pair production threshold.
4.4 Event Topologies
The different final state leptons produced in CC and NC interactions give rise
to three distinct light-patterns in the detector.
All neutrino flavors give rise to an initial hadronic shower with an implicit
electromagnetic component from neutral pion decays. For CC interactions, a
charged lepton is created with matching lepton flavor. Muons can travel kilometers in ice without stopping, giving rise to track-like light-patterns, so-called track topologies. The long lever arm is ideal for the determination of direction, hence these events are suitable for point-source analyses and are the type of event considered in this thesis.
Electrons only travel a short distance in the detector, giving rise to spherically shaped light-patterns, so-called cascades. Tau leptons are very heavy and decay quickly. In 83% [2] of the cases they produce either an electromagnetic or hadronic cascade, while in 17% [2] of the cases they produce a muon, giving rise to a track. For sufficiently high energies, a tau can travel long enough to give rise to a so-called double-bang topology: two spatially separated cascades³.
At low energies, the difference between the topologies becomes less clear.
Further, all NC interactions produce hadronic cascades induced by the break-up of the nucleon involved. Figure 4.6 illustrates the different topologies: the left figure shows both a starting and an incoming muon track event, while the right figure shows a typical cascade event, in this case illustrated by an incoming electron antineutrino interacting inside the detector.
Summary
In this chapter we reviewed topics important for neutrino detection, in particular for the detection of neutrinos in large-scale neutrino telescopes such as IceCube. Next, we motivate the required detector scale with a simple estimate of the level of the astrophysical neutrino flux. We also cover, in
depth, the IceCube Neutrino Observatory including its main constituent, the
Digital Optical Module (DOM).
³ While a 10 TeV tau travels less than 1 m, a 1 PeV tau can travel ∼ 50 m before decaying [2].
5. The IceCube Neutrino Observatory
“We are seeing these cosmic neutrinos for the first time.”
Francis Halzen, 2013
Why do we need cubic-kilometer scale neutrino telescopes? Predictions of astrophysical neutrino fluxes are available for many sources. As a benchmark point source flux for a Galactic source we use:

Eν² dΦν/dEν ∼ 1 × 10⁻¹¹ TeV cm⁻² s⁻¹.    (5.1)
The total number of neutrinos, Nd, for a detector livetime T, is given by convolving the flux with the detector acceptance, and is evaluated for a given source declination θ:

Nd = T · ∫ dEν · (dΦν(Eν)/dEν) · Aeff(Eν, θ),    (5.2)
where Aeff is the so-called effective area of the neutrino telescope. Technically, it is the equivalent geometric area over which the detector is 100% effective for passing neutrinos. Essentially, we can think of the effective area as a parametrization of the detector performance. It is a particularly valuable quantity when comparing the predicted outcome and sensitivity of different experiments for similar physics. Note, however, that it does not say anything about the purity and reconstruction quality of an event sample. The flux dΦν/dEν is given in units of GeV⁻¹ cm⁻² s⁻¹.
The effective area for muon-neutrinos includes the probability that the neutrino survives the propagation through the Earth, interacts close to the detector volume, and induces a secondary muon producing enough light to trigger the detector [29]. Following the recipe in [29] it can be parametrized as:
Aeff(Eν, θ) = A · Pν→μ(Eν, Eμ^thr) · ε · e^(−σ(Eν) ρ NA Z(θ)),

where A is the geometrically projected surface area of the detector, and ε corresponds to the fraction of muons with energy above Eμ^thr that are detected, including triggering, successful reconstruction, and event selection cuts. The exponential factor accounts for the absorption of neutrinos along a given path Z(θ)
Figure 5.1. The IceCube Lab located at the surface, in the center of the IceTop air
shower array. The bottom part of the picture includes an illustration of a muon event
lighting up the optical modules of the in-ice array with Cherenkov photons as it traverses the detector. Credit: IceCube Collaboration.
in the Earth and starts to become important for energies above ∼ 50 TeV
[112]. σ(Eν ) is the total neutrino cross-section, ρ is the target density, and
NA ≈ 6.022 × 10²³ mol⁻¹ is Avogadro's constant [2]. Pν→μ is the so-called muon probability, the probability that a neutrino with energy Eν will produce a muon with at least energy Eμ^thr on a trajectory through the detector. This quantity is a convolution of the differential neutrino cross-section with the muon range in ice, and can be approximated as ∼ 1.3 × 10⁻⁶ Eν^0.8 for energies in the range 1−10³ TeV [84].
Equation 5.2 can be solved analytically given the benchmark flux of neutrinos from equation 5.1, in the range 1−10³ TeV, assuming A = 1 km² and ε = 0.1 (i.e., constant). This gives us roughly 1.5 events per year [29].
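This estimate is easy to reproduce numerically; a sketch evaluating equation 5.2 with the benchmark flux of equation 5.1 and the Pν→μ parameterization from [84] quoted above (Earth absorption neglected, since it only matters above ∼ 50 TeV):

```python
from scipy.integrate import quad

T = 3.156e7  # detector livetime: one year in seconds
A = 1e10     # projected area: 1 km^2 in cm^2
eps = 0.1    # constant detection efficiency, as assumed in the text

def integrand(E_tev):
    dphi_dE = 1e-11 / E_tev**2     # benchmark flux (eq. 5.1), TeV^-1 cm^-2 s^-1
    p_nu_mu = 1.3e-6 * E_tev**0.8  # muon probability parameterization [84]
    return dphi_dE * p_nu_mu

rate_per_area, _ = quad(integrand, 1.0, 1e3)  # integrate over 1 - 10^3 TeV
print(f"N_d ~ {T * A * eps * rate_per_area:.1f} events per year")  # ~1.5
```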
By instead using the published effective area for IceCube in the Northern Hemisphere (NH) [113], we get ∼ 13 [29] events per year for an unbroken Eν⁻² spectrum on the level of the benchmark flux. If we assume a source model with a spectrum identical in shape to that measured by HESS for the Southern hemisphere (SH) source RX J1713.7-3946 [29], we get about 2.7 [29] events per year using the same effective area from [113] and assuming Φνμ ≈ ½ Φγ,
where Φγ is given by [29]:
Eγ² dΦγ/dE = Φγ⁰ e^(−(Eγ/Ec)^(1/2)),    (5.3)
where Φ0γ is the benchmark flux from equation 5.1 and Ec = 3.7 TeV.
Hence we really need cubic-kilometer-sized detectors or very long observation times, preferably both, to detect a significant number of astrophysical neutrinos, in particular if they originate from sources in the SH, where the effective area is in general lower due to the huge background of atmospheric muons. To be practical, large volumes of detector material have to be naturally occurring in forms that are possible to exploit. These include the transparent materials air, ice, and water. The natural choice for the detection technique is to use Cherenkov light detection with PMTs.
At present there are three large neutrino telescopes in the world that use either ice or water as the detector medium. The largest by far is the cubic-kilometer-sized (∼ 10⁹ m³) IceCube, located in the Southern hemisphere (South Pole, Antarctica). ANTARES (Astronomy with a Neutrino Telescope and Abyss environmental Research project) is located in the Northern hemisphere (in the Mediterranean Sea, off the coast of Toulon, France) and consists of 12 strings of optical modules, in total ∼ 900 PMTs. It has an instrumented volume of ∼ 10⁷ m³ and was completed in 2008. ANTARES provides the best limits in the SH at low energies and is a predecessor of the larger Cubic Kilometer Neutrino Telescope (KM3NeT). The Baikal Deep Underwater Neutrino Telescope (BDUNT) is located in Lake Baikal, Russia. This observatory is under expansion and the new Baikal Gigaton Volume Detector (Baikal-GVD) will have a projected total volume of ∼ 5 × 10⁸ m³ when completed in 2020 [114].
The construction of IceCube began in 2004 and was completed in 2010.
Due to the weather conditions in Antarctica, detector deployment work was only possible during a couple of months each austral summer. IceCube is located roughly one kilometer away from the Amundsen-Scott station and the geographic South Pole. It is a multi-purpose observatory with a large range of scientific goals; a primary goal is to improve our understanding of the origin of CRs.
The IceCube observatory includes the main instrument IceCube, a cubic-kilometer in-ice neutrino detector with a dense central subarray called DeepCore, a surface air-shower array called IceTop, and the IceCube Lab (ICL) at the glacial surface, to which all cables from both IceCube and IceTop are connected. The ICL is depicted in figure 5.1, including an illustration of a muon-like event in the in-ice detector. Further, figure 5.2 shows the whole facility, including also the Antarctic Muon And Neutrino Detector Array (AMANDA)-II, the predecessor of IceCube. These are all shown in relation to the Eiffel Tower, illustrating their enormous size.
IceTop consists of 162 large tanks with purified water frozen to clear ice in a controlled process to avoid bubble formation. Each tank contains two optical
Figure 5.2. The IceCube Neutrino Observatory. The in-ice neutrino detector
(‘IceCube array’), the denser subarray DeepCore, the surface air-shower array IceTop, and IceCube Lab centered in the middle of the IceTop array, are all indicated.
Also shown is the precursor to IceCube, the AMANDA-II array which was decommissioned in 2009. Credit: IceCube Collaboration.
sensors, one operated at high gain and one at low gain. IceTop detects atmospheric
showers through signals induced by muons, electrons, and photons. These
emit Cherenkov radiation as they pass through the tank. IceTop is used for calibration, veto, and CR studies above a primary energy of ∼ 10² TeV. One
of the key measurements so far is that of the anisotropy of the arrival directions
of CRs discussed in chapter 3. In the analysis described in chapter 10, we use
IceTop as a coincidence shield for in-ice events.
5.1 The IceCube In-Ice Detector
IceCube instruments a cubic-kilometer of deep glacial ice and is the world’s
largest neutrino telescope. It is located under the ice cap at the geographic
South Pole, Antarctica. It consists of 5160 optical sensors arranged in a three-dimensional grid between 1450 and 2450 m beneath the surface [107], see
figure 5.2. The ultra-transparent ice at such depths is ideal for observations of
Cherenkov light from charged particles.
The in-ice array is sensitive to energies above 100 GeV [115], hence covering everything between atmospheric neutrinos and GZK neutrinos. However, at the highest energies the flux is so low that no, or at most very few, events are expected. Further, the absorption of neutrinos at UHE energies limits the possibilities to detect EeV neutrino events.
The denser DeepCore subarray enables searches down to 10 GeV [116],
important in particular for dark matter and neutrino oscillation analyses. The
lower energy bound of IceCube can be lowered further for searches of bursts
of neutrinos from supernova explosions, giving rise to a collective increase in
the photoelectron (p.e.) count rate.
The detector was constructed using a hot water drill making 2.5 km deep
holes, about 60 cm wide, into which optical modules, so-called DOMs, were lowered on a total of 86 strings [117]. To penetrate the first 50 m of compact snow, a so-called firn drill was used, since this low-density layer does not hold water [117]. Each string is supported by a cable that provides power and
communication with the Data Acquisition System (DAQ) on the surface. For
details concerning the construction of IceCube, see [107, 117] and references
therein.
The DOMs constitute the fundamental building blocks of IceCube [118].
Each of the 5160 DOMs operates as an individual detector and was designed to operate for at least 15 years [107]. Out of the 86 strings, 78 are arranged on a hexagonal grid with an average string spacing of 125 m [119]. These constitute so-called standard IceCube strings. Each of these strings hosts 60 DOMs evenly spaced over a length of 1 kilometer, with an average DOM-to-DOM spacing of about 17 m [117].
The remaining 8 strings also host 60 DOMs each, distributed over two regions: one at the bottom of the detector consisting of 50 DOMs with a DOM-to-DOM spacing of about 7 m, and one slightly above the so-called dust layer discussed in section 5.2, consisting of 10 DOMs with a DOM-to-DOM spacing of about 10 m [116]. The latter can be used as a veto against down-going muons penetrating towards DeepCore. These strings were inserted between the standard strings at the center of IceCube and have an average string spacing of ∼ 55 m. They define, together with the 12 surrounding standard strings, the DeepCore subarray¹,².
5.1.1 Coordinate System
The coordinate system used has its origin close to the geometrical center of
the in-ice detector. The z-axis points to the surface and z = 0 corresponds to
a depth of 1946 m. The y-axis points toward longitude 0◦ and the x-axis lies
perpendicular to the plane spanned by the y- and z-axis, altogether defining a
¹ The actual number of surrounding standard strings included in analyses dedicated to DeepCore
depends on the application. Typically, either 1, 2, or 3 layers of the closest surrounding strings
are included.
² Note that most of the DOMs used for these 8 additional strings have PMTs with a higher
quantum efficiency [116].
[Figure 5.3 sketch: x-, y-, and z-axes, the surface, longitude 0°, a source direction, zenith angle θ, and azimuth angle φ.]
Figure 5.3. Coordinate system.
right-handed system. To indicate a source position in the sky we use spherical
coordinates θ ∈ [0, π] (zenith) defined from the direction of the z-axis and φ ∈
[0, 2π) (azimuth) defined counter-clockwise starting from the x-axis, see figure
5.3.
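As a minimal illustration of this convention (the function name is ours), the unit vector pointing from the detector toward a source at zenith θ and azimuth φ is:

```python
import math

def source_direction(theta, phi):
    """Unit vector toward a source at zenith theta and azimuth phi (radians),
    in the right-handed (x, y, z) system of figure 5.3."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# A particle arriving from this source travels along the opposite direction:
x, y, z = source_direction(math.radians(120.0), math.radians(45.0))
print((-x, -y, -z))
```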
5.2 Optical Properties of the Ice at the South Pole
To detect the faint Cherenkov light the detector medium needs to be highly
transparent and in a dark environment. For this purpose IceCube uses the
ultra-transparent glacial ice deep under the surface of the South Pole. The ice
consists of layers with varying age, including traces from climatic changes and
geological events. The ice just above the bedrock at a depth of about 2800 m
is close to 165,000 years old [122].
The ice properties are observed to vary throughout the detector and the
overall picture is complex including dust layers, air bubbles, anisotropies, and
an overall tilt [121]. To accurately reconstruct the light patterns we need a
deep understanding of the medium in which the light propagates.
The ice is studied by using Light Emitting Diodes (LEDs) from so-called
flasher-boards in the DOMs, see section 5.3. Ice properties were also studied
during construction: a so-called dust logger, a combination of camera equipment and lasers was lowered into a few holes, scanning the ice prior to the
deployment of strings. Further, a permanent camera, the so-called Sweden
90
Figure 5.4. The effective scattering length λe and absorption length λa (see y-axis on
right-hand side) as a function of depth for a wavelength of 400 nm. The gray area
represents the tolerated range in a global fit to in-situ data while the black solid line
shows the SPICE-Mie [120] ice model, the benchmark model used in this thesis. The
AHA model shown is a previous ice model based on separate fits to data for individual
pairs of emitters and receivers [121]. Credit: Figure 16 from reference [120].
camera, was deployed at the bottom of string 80 at the center of the array.
The results revealed the presence of a central core of ice with air bubbles in
the hole, over roughly 1/3 of its diameter. This so-called hole ice affects the
angular sensitivity of the DOMs, making it more isotropic, due to the short
scattering length.
The Cherenkov light will be scattered and/or absorbed as it propagates
through the ice. In particular, scattered light will arrive later to the DOMs.
We define an absorption length λa corresponding to the distance where only
a fraction 1/e of the photons remain. We also define a scattering length λ s
corresponding to the average distance between scatters. To a first order these
are considered as functions of depth and wavelength.
When the glacier is formed from compressed snow, air is entrapped in bubbles. For ice that has reached depths of 1,300 m [121], the air is under higher pressure and diffuses into the surrounding ice, forming non-scattering air hydrate crystals, so-called clathrates [121]. Hence, in the region where IceCube is located we are mainly concerned with dust particles, predominantly mineral dust but also salt, acids, and volcanic soot.
We typically record light that has been scattered several times. Further,
these scatter-centers in general lead to an anisotropic distribution of the scattered light with a strong preference in the forward direction [121]. Following
the definition in [121], we define an effective scattering length λe :
λe = λs / (1 − ⟨cos θ⟩),    (5.4)
in terms of the average cosine of the angle θ for single scatters, ⟨cos θ⟩ ≈ 0.94 (representing an average over relevant wavelengths) [121]. Similarly, we define the effective scattering coefficient be = 1/λe and the absorption coefficient a = 1/λa.
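The factor 1/(1 − ⟨cos θ⟩) is large for such forward-peaked scattering; a short sketch (the λs value is illustrative, chosen to reproduce the ∼ 25 m quoted below):

```python
cos_mean = 0.94  # average cosine of the single-scattering angle [121]
lambda_s = 1.5   # m, geometric scattering length (illustrative value)

lambda_e = lambda_s / (1.0 - cos_mean)  # eq. 5.4: ~17x the geometric length
b_e = 1.0 / lambda_e                    # effective scattering coefficient
print(f"lambda_e = {lambda_e:.0f} m, b_e = {b_e:.2f} m^-1")  # 25 m, 0.04 m^-1
```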
The top (bottom) plot of figure 5.4 shows the effective scattering length
λe (the absorption length λa ) as a function of depth for a wavelength of 400
nm. On the opposite axes we show the corresponding coefficient. The figure
shows the presence of several large dust peaks corresponding to cold periods
from the last glacial period in the late Pleistocene [122]. In particular, the large
dust peak in the middle of the detector correlates to an interglacial period about
65,000 years ago [122].
In this main dust layer the absorption length is as small as 20 m, while on average it is closer to 100 m (200 m) in the shallower (deeper) parts of the detector. Further, the effective scattering length above the main dust layer is about 25 m, while below, where the ice becomes clearer, it is as long as 70 m.
For the analysis presented in this thesis we use the SPICE-MIE ice model
[120]. It is shown in figure 5.4 as a black solid line and corresponds to a
global fit to data from in-situ light sources as described in the beginning of
this section. The tolerated range of the global fit is illustrated in the gray area.
Figure 5.5. The Digital Optical Module (DOM) can be considered the smallest building block of IceCube and is essentially autonomous. It includes a PMT, digitizer
electronics, and calibration LEDs. The DOMs are deployed on 86 strings, with 60
DOMs each, and are lowered into the South Pole ice down kilometer-long holes made
using a hot water drill. Credit: Figure 2 from reference [118].
The AHA model shown in black dashed lines is a previous ice model based on
separate fits to data for individual pairs of emitters and receivers [121].
The parameters used in the SPICE-MIE ice model are be and a, both evaluated at 400 nm, the temperature δτ for each 10 m layer of ice plus additional
parameters [121]. The column of re-frozen ice in the drill hole is modeled by
including a modification of the effective angular sensitivity of the DOMs as
mentioned above.
5.3 The Digital Optical Module
Each DOM in IceCube consists of a downwards-oriented PMT, digitization electronics (the DOM mainboard), and calibration LEDs, all housed in a pressure-resistant 13 mm thick glass sphere, see figure 5.5. The glass sphere
is sealed with dry nitrogen gas at 0.5 atm and can withstand a pressure of 70
MPa [119]. The material is transparent in the operating range of the PMTs, between 300 nm and 650 nm, with a maximum transparency at 410 nm. The PMT is supported and optically coupled to the glass sphere by a flexible silicone gel [119]. A penetrator links the interior of the DOM to the cable outside
and a so-called mu-metal grid partly shields the PMT against the Earth’s magnetic field [118]. A single twisted-pair of ∼ 1 mm copper conductor carries
communications, power, and timing signals to the DOMs from the surface.
Once the modules are frozen in, they are operated remotely from the ICL on
the surface. Each DOM is an autonomous data acquisition module including
full digitization of the signals. IceCube uses a decentralized time-stamping
procedure: the waveforms in each DOM receive a local time-stamp from its
onboard 20 MHz oscillator. The local time is continuously monitored in relation to a Global Positioning System (GPS) reference time from a master clock
at the ICL. The timing resolution, following this procedure, is 2 ns [119].
Note that the limiting factor for reconstructions is dominated by the scattering
of photons in the ice.
The PMTs are of type HAMAMATSU R7081-02, approximately 25 cm (10 inch) in diameter, and contain 10 dynodes giving a total amplification factor of about 10⁷ at the operating voltage³, hence enabling detection of single photons. The quantum efficiency, the probability that a photon striking the photocathode will result in the ejection of an electron, is about 25% at 390 nm [119]. The DOMs used for the DeepCore array and a few standard strings have
PMTs with a higher quantum efficiency (R7081MOD), about 35% higher than
for standard IceCube PMTs [116]. These are denoted HQ-DOMs.
The PMT response is affected by pre-pulses (∼ 1% of the Single Photoelectron (SPE) rate [119]), due either to electrons taking shortcuts in the multiplication (avalanche) region or to photo-production on the first dynode, by late pulses due to inelastic scattering on the first dynode, and by so-called after-pulses associated with ionization of residual gas [119]⁴. Additionally, PMT dark noise is caused by thermal electrons evaporating from the photocathode and/or individual dynodes, ∼ 300 Hz [119]. Including also the noise from radioactive decays in the glass housing, the total noise rate is about 650 Hz⁵ [107]. These features are all simulated in the Monte Carlo (MC), see chapter 6.
The analog signals from the PMTs are digitized if they are above a discriminator threshold representing 25% of the average signal of one photoelectron.
When two neighboring or next-to-neighboring DOMs on the same string both
cross the threshold within a 1 μs time window, the signals qualify as a Hard
Local Coincidence (HLC). Conversely hit DOMs that do not qualify are defined as having a Soft Local Coincidence (SLC). The HLC are used in the
trigger criterions, see section 5.4, to reduce the influence of noise from individual DOMs on the trigger rate.
To cover a large time range of signals from the PMT we use two separate
digitizing chains including both a fast Analog-to-Digital Converter (fADC)
and two Analog Transient Waveform Digitizers (ATWDs). The ATWDs are
working in so-called ping-pong mode to avoid dead-time. This means that one
³ The voltage is set individually for each DOM and tuned to provide the same gain. It is typically in the range 1100-1500 V.
⁴ Late pulses give rise to photo-electrons delayed by tens of ns, while after-pulses are typically delayed by several μs.
⁵ High-Quantum-Efficiency DOMs have a total noise rate of about 900 Hz [107].
Figure 5.6. Bird’s eye view of IceCube showing the position of all 86 strings. Gray
filled circles indicate standard IceCube strings, while white filled circles indicate
strings with HQ-DOMs. Strings marked with a black border constitute the so-called
DeepCore fiducial volume used for one of the primary triggers in this analysis. The
gray outer area indicates the main veto region as defined in the analysis presented in
this thesis. Credit: Illustration by Henric Taavola, modified by including a veto layer,
shown in gray.
unit is always available to capture signal while the other might be digitizing a
previous signal.
The ATWDs cover a large dynamic range and have three digitizing channels each, with amplifications of 16x, 2x, and 0.25x⁶. They collect 128 samples at a rate of 300 MHz, resulting in a bin width of 3.3 ns over a total of 422 ns [118]. Further, the ATWD chip has a fourth channel used for calibration and monitoring [118]. The fADC collects 256 samples at a rate of 40 MHz, resulting in a bin width of 25 ns over a total of 6.4 μs, hence a longer window particularly
useful for bright events.
The DOM mainboard contains a so-called flasherboard with 12 evenly
spaced LEDs peaking around 410 nm that are directed outwards at different
angles (6 are directed horizontally and 6 are tilted upwards at an angle of 48 degrees). These are used to calibrate the detector and for the in-situ calibrations mentioned in section 5.2. Some of the DOMs have LEDs with different
6 The amplitude of the waveform decides which channels should be used.
colors, with wavelengths 340 nm, 370 nm, 450 nm, and 505 nm. These are
used to obtain a more detailed understanding of the wavelength dependence
of the optical properties of the ice. See [119] and [107] and references therein
for more details on the calibration of IceCube.
5.4 Data Acquisition System and Triggering
Since signals are digitized in the DOM and only digital information is sent to
ICL, the following parts of the DAQ are based on processors and software. Each
string is connected to a so-called DOMHub at ICL that controls the power and
physics data transfer through so-called DOM Readout (DOR) cards7 .
Digitized signals are sent from each DOM with signal above the discriminator threshold to the surface in data packages called DOMLaunches. They
consist of locally generated timestamps, coarse fADC info in case of SLC and
the full fADC and available ATWD waveforms in case of HLC [118]. The
coarse fADC contains the amplitude of the highest bin and two neighboring
bins together with a timestamp [118].
To reduce the total noise rate from the DOMs we apply a coincidence trigger
criterion, based on the information in the DOMLaunches. The leading trigger
condition, Simple Majority Trigger (SMT)-8, is formed when at least 8 DOMs
have HLC pulses within a 5 μs time window. In the analysis presented in this
thesis we also use events from a trigger dedicated to catching almost vertical
events, the so-called StringTrigger, demanding HLC from 5 out of 7 adjacent
DOMs on the same string within 1 μs. Further, we use a trigger with a lower
effective energy threshold active in the so-called DeepCore fiducial volume. It
consists of the lowest 50 DOMs on the 6 DeepCore strings as well as the lowest
22 DOMs on the 14 surrounding standard strings, and requires 3 HLC DOMs
within 2.5 μs [116]. The strings involved are marked with black borders in
figure 5.6.
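As an illustration, the SMT-8 condition can be pictured as a sliding-window count over HLC launch times. This is a simplified sketch with hypothetical names, not the actual trigger implementation; for brevity it counts launches rather than distinct DOMs.

def smt_fires(hlc_times_ns, multiplicity=8, window_ns=5000.0):
    """Return True if at least `multiplicity` HLC launches fall
    within any time window of length `window_ns` (SMT-8 defaults)."""
    times = sorted(hlc_times_ns)
    for i in range(len(times) - multiplicity + 1):
        # window anchored at the i-th launch in time order
        if times[i + multiplicity - 1] - times[i] <= window_ns:
            return True
    return False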
Each trigger in the system is considered globally when forming the so-called event trigger hierarchy. This consists of all DOMLaunches within the
time window defined by merging all overlapping triggers plus an additional
10 μs readout window. The total trigger rate considered is ∼2,500 Hz, but is
influenced by atmospheric conditions and seasonal variations, an effect of ∼
10%. The online filtering system, discussed in section 5.5, reduces the overall
rate by about a factor of 10.
7 Control of DOM operation, firmware updates, time synchronization, restarts, High Voltage (HV) calibration, flasher operation, etc., are also handled by the DOMHubs over the DOR cards.
5.5 Processing and Filtering
Each IceCube event is processed in the ICL, so-called online processing, where
the raw waveforms are calibrated, baselines are subtracted, and so-called droop
correction is made8 . This is done individually for each DOM using baselines
measured in in-situ calibration data.
SPE pulses are extracted from the waveforms using an iterative unfolding algorithm with pre-defined templates measured in the laboratory. The
extracted pulses consist of the leading edge time⁹ and charge, and define a
so-called pulse-series for each event.
A hit in IceCube can be referred to as either a DOMLaunch, i.e., when a
DOM detects one or more photons that cross the discriminator threshold, or
a pulse extracted from the waveform using the feature extraction mentioned
above. The definition used for a particular case will be made clear from the
context.
The SLC pulses are particularly useful to improve the reconstruction of
low-energy muon tracks by increasing the number of hits and the lever arm
between first and last hit in time, and to get an additional efficiency on vetoing
incoming background tracks. However, the majority of the SLC pulses are
due to pure noise. Therefore, the pulse-series need thorough cleaning before they can be used for the reconstruction of event variables. The different cleaning
algorithms are described in detail in the beginning of chapter 7. Only keeping
HLC pulses removes almost 30% of the hits connected to particles traversing
the detector.
First reconstructions of direction and energy are done online and made
available for the filtering scheme that defines the samples used for analyses.
The filtering used for the data presented in this thesis is covered in depth in
chapter 8. Data that pass at least one filter are sent north via satellite for further processing (so-called offline processing). This is typically customized for each analysis. The offline processing and event selection for the sample used
in this thesis is presented in chapter 10.
Summary
In this chapter we introduced the detectors used to collect data for the analysis
presented in this thesis. The IceCube in-ice detector is the world’s largest neutrino detector and consists of over 5,000 optical modules collecting Cherenkov
radiation in the ice. In the next chapter we will discuss the event simulations
used to define the astrophysical signal.
8 The tail of the waveform undershoots due to saturation in a transformer between the HV supply and the PMT. This only affected the first part of the DOMs; the transformer was exchanged in the latter part of the DOM production.
9 Defined as the intersection between the baseline and the tangent of the segment of fastest
ascent.
6. Event Simulation
“If a little dreaming is dangerous, the cure for it is not to dream less
but to dream more, to dream all the time.”
Marcel Proust
Monte Carlo (MC) simulations are an essential tool to verify the detector response and efficiency. Further, in many analyses, e.g., searches for a
diffuse flux of astrophysical neutrinos, MC is used to determine the level of
background and is hence critically important. In the analysis presented in this
thesis we search for a clustering of neutrino events and use scrambled experimental data as background. However, a simulation of the background is still
important to verify the simulation chain, since large parts of it are also used for
the signal simulation of astrophysical neutrinos. In each step of the analysis, we carefully investigate the agreement between the measured data and the
simulated background. In particular, we avoid using variables or regions of
variables where large discrepancies are found.
In this chapter we present the simulation chain including event generation,
propagation, and detector response. Further, we discuss how to weight generated events into a desired spectrum which is different from the generated
spectrum.
6.1 The IceCube Simulation Chain
The simulation chain used in IceCube is encoded in the software framework
IceSim that mainly consists of three parts: particle generation, propagation,
and detector simulation (detection), see figure 6.1. Starting from the top of the
figure, following the arrow, events of a specified type are generated with an energy, direction, and position relative to the detector. We simulate atmospheric
muons and neutrinos resulting from air showers of primary CRs, but also so-called signal neutrinos of astrophysical nature. Each event is injected in a
region close to IceCube to ensure that it has a chance to trigger the detector.
The primary and secondary particles are propagated through the Earth’s atmosphere, rock and ice including energy losses and further particle generation
Figure 6.1. IceCube simulation chain. Following the arrow, we start by generating
particles of a given type, energy, direction, and position relative to the detector. The
primaries and their secondary particles are propagated through the atmosphere, matter
and ice including energy losses and further particle production. The final step includes
the detector response with hit reconstruction, PMT simulation, etc.
from interactions, see section 4.3. Particles that travel faster than the speed of
light in the ice give rise to Cherenkov photons, see section 4.2. These are also
propagated. The final step includes the full detector response and is simulated
for each event. Note that we only keep triggered events.
Particle generation and detection is optimized to result in large statistics in
a given desired energy interval, i.e., the simulations are biased to oversample
interesting events. To be able to change the generation spectrum a posteriori
each event is given a weight, a so-called OneWeight, that takes all simulation
steps into account. Using these weights, the sample can be re-weighted to
correspond to any desired particle flux. This is very convenient, since it allows
generic simulations, useful for many different analyses, to be made with very
large statistics. For more details, see section 6.2.
6.1.1 Event Generation
For this analysis we used two different event generators: CORSIKA (COsmic Ray SImulations for KAscade) [123] to generate extensive air showers
and NuGen (Neutrino-Generator) [102] to generate neutrinos including their
propagation through Earth. Both are generated using IceCube simulation software1 , and are further described in the next set of paragraphs.
Atmospheric Muon Background
Downgoing atmospheric muons created in extensive air showers constitute the
largest background for this analysis. These can enter through the veto regions
of the detector without leaving a trace and hence effectively mimic neutrino-induced muons truly starting inside the instrumented volume.
Primary nuclei in the range from protons to iron are injected at the top of
the Earth’s atmosphere creating air showers as a result of interactions with air
nuclei. Each of these particles is tracked in the CORSIKA framework that includes both single and coincident air showers as well as hadronic interactions
described by SIBYLL (an event generator for simulation of high-energy CR
cascades) [124]. Secondary particles are tracked through the atmosphere until
they interact with air nuclei, decay in the case of unstable secondaries, or go
below a pre-set energy threshold. Since muons are the only charged particles that can reach the in-ice detector, the rest of the shower is manually stopped as
it reaches the surface.
In figure 6.2 we show the results of CORSIKA simulations for various primary nuclei and energies. These plots are taken from [125]. The vertical
axis represents the first interaction height at 30 km (top) and surface at 0 km
(bottom).
The events used in this thesis are generated with a so-called 5-component
model where 5 different primary nuclei (H, He, N, Al, and Fe) are injected
according to specified individual generation spectra, see table 10.2 for details.
For the analysis, we re-weight these events to a polygonato model according
to Hörandel [126] where the primary CR flux is treated as a superposition of
fluxes from protons to uranium nuclei, and the individual cutoffs at the knee
are rigidity-dependent, resulting in a spectrum $\sim E^{-2.7}$. Note that this model does not
include an extragalactic component and hence does not describe UHE CR.
Neutrinos
Neutrinos are generated with NuGen that is based on the ANIS (All Neutrino
Interaction Simulation) event generator [102]. The events are injected at the
surface of the Earth pointing towards the detector with an isotropic distribution as seen from IceCube (isotropic in local coordinates cos θ and φ). The
neutrinos are propagated through the Earth considering both CC and NC interactions and assuming the Preliminary Reference Earth Model (PREM) [127].
1 Software release V03-03-02.
(a) Proton 1 TeV
(b) Proton 10 TeV
(c) Iron 1 TeV
(d) Iron 10 TeV
Figure 6.2. Air-showers simulated using CORSIKA configured with different primary
nuclei and energy. The vertical shower axis represents the first interaction height at
30 km (top) and surface at 0 km (bottom). The horizontal shower axis shows ±5 km
around the shower core. The plots only show tracks above 0.1 MeV for e− , e+ , and
γ (red lines), and above 0.1 GeV for muons (green lines) and hadrons (blue lines).
Credit: [125].
One flavor (νe , νμ , or ντ ) is chosen for the entire simulation run including a
50% probability for the particle to be ν or ν̄.
NuGen is configured with parton Probability Density Functions (PDFs) from CTEQ5 [104] using cross-sections from [103], and in principle assumes DIS only. Technically it can simulate neutrinos with energies in the range from 10 GeV to $10^9$ GeV, the region where DIS is dominant, see figure
4.2 and the discussion in section 4.1. Below 100 GeV, where other processes
become important, IceCube uses a generator called GENIE (Generates Events
for Neutrino Interaction Experiments) [128] that incorporates the dominant
scattering mechanisms from several MeV to several hundred GeV [128].
The events are generated with an $E^{-2}$ spectrum. To arrive at a sample representing astrophysical neutrinos, the events are re-weighted to various power-law spectra, some including exponential cut-offs at the TeV scale. The spectra
considered are presented in detail in chapter 10.
The conventional flux component of the atmospheric neutrino background
is modeled by re-weighting the events to the Honda flux model2 [129]. The
prompt flux component is not modeled in this thesis. It is predicted to start to
dominate at ∼ 1 PeV (see section 3.4 and references therein) and does not need
to be considered in this analysis, focusing on energies ≤ 10 TeV. In fact, the
2 The Honda model assumes an atmosphere that is a climatological average referred to as the U.S. Standard Atmosphere 1976.
plots in figure 10.23 clearly display a transition in experimental data between
an atmospheric muon and atmospheric neutrino component, the latter well
described by conventional atmospheric muon-neutrinos only.
6.1.2 Propagation
All relevant particles, primary and secondary, are propagated through the ice.
This includes both charged leptons and photons.
Lepton propagation
The propagation of charged particles through ice and rock is done using a software package called PROPOSAL (PRopagator with Optimal Precision and
Optimized Speed for All Leptons) which is a C++ re-implementation of the
Java program MMC (Muon Monte Carlo) [110]. The software accounts for
both stochastic and continuous energy losses, see section 4.3. It uses a homogeneous ice model with properties from [2, 130]. For further details about the
parametrization of energy losses and structure of the code see [110].
Photon propagation
Photons are propagated using PPC (Photon Propagation Code) [131], a direct photon propagation code using Graphical Processing Units (GPUs) and
splined tables of local ice properties from the configured ice model, in this
analysis SPICE-Mie [120].
This direct propagation is very compute-intensive and impractical for the
creation of PDFs of the arrival times of photons to be used in the event reconstructions described in detail in chapter 7. For this purpose, we use look-up
tables created using a faster, Central Processing Unit (CPU) based, software
called Photonics [132]. It propagates photons from an injection point to the arrival at the DOMs using a statistical approach rather than following individual
photons. The Photonics tables consist of PDFs of the arrival times of photons
for different depths and various light sources, photons from starting, stopping,
and through-going muons, as well as from electromagnetic cascades. The actual number of expected photons used in the reconstructions is sampled from a
Poisson distribution with the mean corresponding to the tabulated values. Further, Photonics uses tabulated values of the local ice properties: the scattering
and absorption are parametrized as functions of wavelength and depth.
The general agreement between PPC and Photonics is good, in particular at
energies above ∼ 100 GeV, i.e. the agreement is good in the full energy range
considered in this analysis. For lower energies, linear interpolation between
bins leads to a systematic overestimation of the expected number of photons,
i.e., Photonics overestimates the light yield, in particular for light emitters far
from the receiving DOM.
6.1.3 Detector Response
The simulation of the detector’s response to Cherenkov photons includes several steps: PMT output, detector electronics, DOM logic and triggering. The
PMT simulation includes dark noise, pre-pulses and after-pulses generated
using a so-called noise-generator. Further, we also include thermal noise
according to a Poisson distribution (uncorrelated noise for each DOM). The
mainboard response is simulated including the digitizing steps of the ATWDs
and fADC.
Waveforms are generated in an inverse procedure compared to the unfolding done for the feature extraction, described in section 5.5. DOMLaunches
are formed and sent to the trigger simulation algorithm including the local coincidence logic assigning flags for HLC and SLC. Pulses are extracted from
the DOMLaunches using a feature extraction similar to the one used for experimental data.
DOM Efficiency
DOM efficiency is a collective term describing a number of local DOM properties related to the light collection efficiency, such as properties of local hole
ice, transmittance of the glass housing and optical gel, the quantum efficiency
of individual PMTs, and the PMT threshold. While some of these properties
can be calibrated in the laboratory prior to deployment in the ice [119], others
can only be inferred through in situ calibrations (see section 5.3) of the DOMs
after deployment. The DOM efficiency level is used as a handle in simulations
and can be tuned to fit calibration data. Deviations from the baseline are used as
a tool to evaluate the systematic uncertainty of the analysis, see section 10.8.
6.2 Neutrino Event Weighting
In the beginning of chapter 5 we discussed the expected event rate in kilometer-sized neutrino detectors, see equation 5.2, where the expected number of
events is given as a convolution of the neutrino flux with the effective area.
The latter can simplistically be calculated as:
$$A_{\mathrm{eff}} = A_{\mathrm{gen}} \cdot \frac{N_d}{N_{\mathrm{gen}}}, \qquad (6.1)$$
where Agen is the generation area inside which we generated a total of Ngen
events. Nd is the number of detected events in the detector at the level of the
event selection where Aeff is evaluated.
Our simulations are biased in many different ways to make them more efficient, in particular to save computing time. E.g., we force the neutrinos to interact in the vicinity of IceCube, so that the secondaries will trigger the detector, and we generate events according to a spectrum $E^{-\gamma}$, sometimes different from the desired one. Equation 6.1 does not take any of these factors
into account and hence needs to be modified to accurately represent the effective area. The total number of generated particles, $N_{\mathrm{gen}}$, corresponds to $\tilde{N}_{\mathrm{gen}}$ unbiased particles. In a bin $(j,k)$ in energy and zenith angle, respectively, this is defined as:

$$\tilde{N}_{\mathrm{gen}}^{(j,k)} = \frac{N_{\mathrm{gen}}}{P_{\mathrm{int}}^{(j,k)}} \cdot \frac{\int_{E_j}^{E_{j+1}} E^{-\gamma}\,dE}{\int_{E_{\mathrm{min}}}^{E_{\mathrm{max}}} E^{-\gamma}\,dE} \cdot \frac{2\pi \cdot [\cos\theta_k - \cos\theta_{k+1}]}{\Omega}, \qquad (6.2)$$

where $P_{\mathrm{int}}^{(j,k)}$ is the neutrino propagation and interaction probability for bin
( j, k), Emin and Emax are the minimum and maximum generation energy, respectively, γ is the spectral index of the generation spectrum, and Ω is the
solid angle from which events were generated.
Replacing $N_{\mathrm{gen}}$ with $\tilde{N}_{\mathrm{gen}}$ in equation 6.1 gives us the effective area for each event $i$ (among $N_d$), in a bin $(j,k)$ in energy and zenith angle, respectively:

$$A_{\mathrm{eff}}^{(j,k)}(E,\theta) = \frac{A_{\mathrm{gen}}}{N_{\mathrm{gen}}} \cdot \frac{\int_{E_{\mathrm{min}}}^{E_{\mathrm{max}}} E^{-\gamma}\,dE}{\int_{E_j}^{E_{j+1}} E^{-\gamma}\,dE} \cdot P_{\mathrm{int}}^{(j,k)} \cdot \frac{\Omega}{2\pi \cdot [\cos\theta_k - \cos\theta_{k+1}]} = \frac{\mathrm{OneWeight}_i}{N_{\mathrm{gen}} \cdot \Omega_k \cdot (E_{j+1} - E_j)}, \qquad (6.3)$$

where $\int_{E_j}^{E_{j+1}} E^{-\gamma}\,dE \approx E_j^{-\gamma} \cdot (E_{j+1} - E_j)$, $\Omega_k = 2\pi \cdot [\cos\theta_k - \cos\theta_{k+1}]$, and

$$\mathrm{OneWeight}_i = \frac{P_{\mathrm{int}}^{(j,k)}}{E_i^{-\gamma}} \cdot \int_{E_{\mathrm{min}}}^{E_{\mathrm{max}}} E^{-\gamma}\,dE \cdot A_{\mathrm{gen}} \cdot \Omega. \qquad (6.4)$$

'OneWeight' is given in the units $\mathrm{GeV}\,\mathrm{cm}^2\,\mathrm{sr}$.
The effective area for an event sample as function of energy is then formed
by a weighted sum in each energy bin, where each event i is weighted according to equation 6.3. A similar construction can be made for the so-called effective volume, i.e., the effective volume is the equivalent geometric volume over
which the detector is 100% effective for interactions occurring inside. This
quantity is particularly interesting for an analysis of particles starting inside a
detector, since it also takes into account the finite length of the tracks.
Additionally, for a given neutrino flux model dΦν (Eν )/dEν , we can calculate the number of expected events Nd , see equation 5.2, by re-weighting
the simulation using the individual event weights ‘OneWeight’, introduced in
equation 6.4. The detector efficiency for a neutrino with a given energy and
direction is included in these weights, and to represent the desired spectral
distribution, these only have to be combined with the right amplitudes. The
number of expected events is therefore calculated as:
$$N_d = T \cdot \sum_i w_i, \qquad (6.5)$$

where

$$w_i = \frac{\mathrm{OneWeight}_i}{N_{\mathrm{gen}}} \cdot \frac{d\Phi_\nu(E_\nu)}{dE_\nu}, \qquad (6.6)$$

$T$ is the integrated livetime, and the sum runs over all remaining events at
the level where Nd is evaluated. The flux dΦν (Eν )/dEν is given in the units
$\mathrm{GeV}^{-1}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}$.
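As a concrete illustration of equations 6.5 and 6.6, the sketch below re-weights a set of generated events to a given flux model. The variable names are hypothetical; in practice 'OneWeight', the true neutrino energies, and the generation parameters are read from the simulation files.

import numpy as np

def expected_events(one_weight, e_nu, n_gen, livetime_s, flux):
    """Equations 6.5-6.6: N_d = T * sum_i w_i, with
    w_i = OneWeight_i / N_gen * dPhi/dE(E_i).
    one_weight : per-event OneWeight [GeV cm^2 sr]
    e_nu       : per-event true neutrino energy [GeV]
    n_gen      : total number of generated events
    livetime_s : integrated livetime T [s]
    flux       : callable dPhi/dE [GeV^-1 cm^-2 s^-1 sr^-1]"""
    w = one_weight / n_gen * flux(e_nu)
    return livetime_s * np.sum(w)

# Example flux: E^-2 with an exponential cut-off at 10 TeV; the
# normalization phi0 is an arbitrary placeholder, not a measured value.
phi0 = 1e-12  # GeV^-1 cm^-2 s^-1 sr^-1 (hypothetical)
flux = lambda e: phi0 * e**-2.0 * np.exp(-e / 1.0e4)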
Summary
In this chapter we discussed the IceCube simulation chain used to produce the
samples of atmospheric and astrophysical neutrinos, as well as atmospheric
muons, for the analysis presented in chapter 10. In the next chapter we will
describe the reconstructions and methods used to determine the interaction
vertex, incoming direction, and energy for each event. These quantities are
the foundation of the point source likelihood introduced in chapter 9.
7. Reconstruction Techniques
“Nothing happens until something moves.”
Albert Einstein
To estimate the characteristics of an event such as its direction, energy and
type (lepton flavor, topology, interaction, etc.) we perform several reconstructions based on the hit1 pattern in the detector using e.g. the time, position,
and charge of the individual pulses. These reconstructions are based on noise
cleaned pulse-series, as described in section 7.1.
7.1 Noise Cleaning
The noise cleaning tries to identify pulses related to a potential particle in the
detector and remove pulses created by noise. This is typically tailored for specific applications: we use different cleaning algorithms to reconstruct the neutrino interaction vertex and direction, respectively. Angular reconstructions
are quite sensitive to noise hits and therefore require rather strict cleaning. On
the other hand, semi-isolated hits of low quality can be crucial for determining
whether a track starts within the detector or came from outside. Through-going tracks may sometimes 'leak in', with very few hits recorded in the outer
parts of the detector array. In order to make a correct event classification those
remote hits should definitely not be removed by cleaning.
The pulse-series used for the reconstruction of the neutrino interaction vertex described in section 7.4 is cleaned in two steps, first by applying a so-called
Time Window Cleaning (TWC), rejecting pulses outside the range [-4, 6] μs relative to the first trigger window in time. Causality information of the pulses is
used in the second step consisting of a so-called Classic RT-Cleaning (CRT)
algorithm where all pulses (HLC and SLC) within a radius (R) of 150 m and
a time (T) of 1 μs from each other are kept while pulses not falling into this
1 Note that in this chapter we will use the words pulse and hit interchangeably. They will both refer to a pulse as the entity extracted using the pulse extraction processes discussed in section 5.5.
category are rejected. This cleaning keeps roughly 96% of the physics hits
and about 18% of the noise hits. The pulse-series resulting from this cleaning
will be denoted RTTWOfflinePulsesFR.
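A stand-alone sketch of the CRT condition is given below (hypothetical names; the production algorithm is part of the IceCube software): a pulse is kept if at least one other pulse lies within R = 150 m and T = 1 μs.

import numpy as np

def classic_rt_clean(pos, t, r_max=150.0, t_max=1000.0):
    """pos: (N, 3) array of hit DOM positions [m];
    t: (N,) array of pulse times [ns].
    Returns a boolean mask of pulses to keep."""
    pos = np.asarray(pos, dtype=float)
    t = np.asarray(t, dtype=float)
    dr = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    dt = np.abs(t[:, None] - t[None, :])
    close = (dr <= r_max) & (dt <= t_max)
    np.fill_diagonal(close, False)  # a pulse cannot be its own partner
    return close.any(axis=1)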
For the angular reconstructions we instead start by applying a so-called
Seeded RT-Cleaning (SRT) algorithm which is inspired by the CRT algorithm.
It initially only considers an already quite clean subset of pulses, a seed consisting of a so-called core of HLC hits. This core consists of those HLC hits
which fulfill a stronger version of the RT criterion, namely that they have not just
one but at least two other HLC hits within the RT range. Starting from the
core, pulses (HLC and SLC) within the RT-conditions from any pulse in the
seed set are added to the seed set. This adding procedure is iterated two more
times. After this we apply a TWC rejecting pulses outside the range [-4, 10]
μs relative to the first trigger window. This cleaning is slightly stricter than the
cleaning used to reconstruct the vertex: it keeps roughly 92% of the physics
hits, while rejecting all but 3% of the noise hits. The pulse-series resulting
from this cleaning will be denoted TWSRTOfflinePulses.
The upper row of figure 7.1 shows a simulated muon-neutrino event before
(left) and after (right) the cleaning process used for the angular reconstructions.
7.2 Particle Direction
Precise determination of the direction of an event is very important in a point
source analysis since the algorithm, described in chapter 9, is looking for a
clustering of event directions with respect to an almost isotropic background.
We use a number of different algorithms to determine the direction, varying
both in accuracy and computing time. The goal of the track reconstruction is
to estimate the most likely direction and position of the muon in the detector,
assuming that most of the recorded pulses in the event were indeed due to Cherenkov light emitted by a muon. The timing and location of the participating
DOMs provide the most important handle on the determination of these parameters. Note that energy is kept fixed during the directional reconstructions
and considered separately in section 7.5.
A simpler first-guess algorithm is used as a starting point, or seed, for more complex ones in a consecutive chain of reconstructions. The more advanced and
time-consuming algorithms can only be applied at a high level of the analysis,
where the number of events has been reduced significantly.
In many branches of astronomy they use a so-called standard candle, a
celestial object of known location and brightness, to calibrate their equipment.
See e.g., the discussion in 3.6.1. Since such a localized astrophysical neutrino
source has yet to be discovered we cannot calibrate the angular reconstruction
and pointing capabilities of the detector directly. However, studies of the so-called cosmic ray Moon shadow as predicted by Clark already in 1957 [133]
Figure 7.1. Illustration of noise cleaning and reconstruction in IceCube. All of the
panels illustrate the same simulated muon-neutrino event with interaction inside the
instrumented volume. The color indicates the time of the event (from red to blue) and
is adjusted to fit the timing of the cleaned event shown in the upper right panel. The
applied cleaning corresponds to the algorithms used for the angular reconstructions.
The upper left panel shows the uncleaned event. The size of the spheres is approximately proportional to the charge observed. The lower panels show the true muon
track in gray, with a matching dot indicating the neutrino interaction point. The left
illustration also displays the first guess, so-called Improved LineFit, reconstruction
discussed in section 7.2.1 (in red). The lower right plot shows the full likelihood reconstruction denoted SPEFit2 in blue, including an illustration of the Cherenkov-cone
hypothesis. This event was successfully reconstructed by SPEFit2 and constitutes a
prime example of a typical starting track-like event.
can be used as a first approximation. Figure 7.2 shows the observed position of
the Moon shadow, denoted μ, measured using one year of data when IceCube
operated in a partial configuration with 59 strings prior to completion [134].
The horizontal axis shows the difference in right ascension (αμ −αMoon )·cos δμ
relative to the Moon position, while the vertical axis shows the corresponding
difference in declination, δμ − δMoon . The position of the observed shadow
agrees with the position of the Moon with a high statistical significance. The
conclusion from this study is that the systematic uncertainty of the absolute
pointing is smaller than ∼ 0.2◦ .
In this thesis we focus on the detection of neutrino-induced muons. These
are detectable and distinguishable from cascade type events above an energy
of ∼ 100 GeV. This limit is primarily set by the geometrical parameters of the
IceCube detector. A muon in the minimum ionizing energy range can travel
about 5 m per GeV in ice, see figure 4.4. With a horizontal string spacing of
125 m and a vertical DOM-to-DOM spacing of 17 m, this implies a minimum
neutrino energy of about 50 − 100 GeV if the final state muon is to be reconstructed. To further distinguish muon tracks from cascades, a slightly higher
energy is required.
[Figure: δμ − δMoon [°] versus (αμ − αMoon) cos δμ [°], with contours at the 1σ, 2σ, and 3σ level.]
Figure 7.2. The cosmic ray Moon shadow as measured by IceCube [134]. The black
dot indicates the best-fit position of the shadow from a maximum-likelihood analysis,
while the white circle shows the position after including effects of geomagnetic deflection. The contour illustrates the uncertainty of the position at the 1, 2, and 3 σ level.
Reprinted figure 9 with permission from [134]. Copyright 2014 by the American
Physical Society.
Neutrino-induced muons, although highly boosted in the forward direction,
will be produced with an angle relative to the direction of the primary neutrino.
The mean scattering angle can be approximated with $\psi_{\nu\mu} \approx 0.7^{\circ}/(E_\nu/\mathrm{TeV})^{0.7}$ [62]. For the energy range considered in this analysis this means the angle is roughly between 0.1° and 3°. This constitutes a kinematic lower bound on the reconstruction capability. However, the reconstructed muon direction is still
the best available proxy for the neutrino arrival direction, and will be used as
such for the remainder of this thesis.
7.2.1 First Guess
The Improved LineFit algorithm [135] is used as a first guess track reconstruction. It is an approximative hypothesis that ignores the geometry of the
Cherenkov cone as well as the detector medium, i.e., photons travel on straight
lines without scattering or absorption in the ice. Instead, every hit is used as
an independent measurement of the position of a straight moving particle at
the time of the hit.
The algorithm minimizes the distance between the moving particle and the
hit DOMs for each event through a least-square fit [136]:
$$\chi^2 = \sum_{i=1}^{N} \left( \vec{r}_i - \vec{r}_0 - \vec{v}_{\mathrm{LineFit}} \cdot (t_i - t_0) \right)^2, \qquad (7.1)$$
where $N$ is the number of pulses, $\vec{r}_i$ and $t_i$ are the DOM position and time of
each pulse (leading edge time), vLineFit is the muon track velocity as it passes
through a point r0 at an arbitrarily chosen time t0 .
The χ2 is modified to be robust against outliers by applying a so-called Huber penalty function. Further, the Improved LineFit algorithm also filters out
hits likely to be scattered by studying the arrival times of pulses in neighboring
DOMs.
The modified χ2 can be solved analytically and is therefore very fast with
a resulting median angular resolution of about a few degrees. The drawback
is the simple description that does not take into account the full complexity
of the Cherenkov cone or the scattering and absorption effects in the ice. In
fact, photons can move around for over 1 μs before being detected [135], corresponding to a muon traveling ∼ 300 m.
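Ignoring the Huber penalty and the hit filtering, the plain least-squares problem of equation 7.1 has a closed-form solution, sketched here with hypothetical names:

import numpy as np

def line_fit(pos, t):
    """Analytic minimizer of eq. 7.1 (without the Huber penalty).
    pos: (N, 3) hit DOM positions [m]; t: (N,) hit times [ns].
    Returns r0, t0, and the LineFit velocity vector."""
    pos = np.asarray(pos, dtype=float)
    t = np.asarray(t, dtype=float)
    t0 = t.mean()
    dt = t - t0
    # component-wise linear regression of position against time
    v = (pos * dt[:, None]).mean(axis=0) / (dt ** 2).mean()
    r0 = pos.mean(axis=0)  # the best-fit track passes the centroid at t0
    return r0, t0, v  # |v| is the LineFit speed used in the selection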
The magnitude of the resulting velocity vector, the so-called LineFit speed,
is close to the speed of light for track-like events, while it is close to zero for
cascade-like events due to the spherically symmetric hit pattern. The LineFit
speed and the individual components of the LineFit velocity vector all constitute variables used in the offline event selection presented in chapter 10.
Figure 7.3. Definition of variables used to calculate the time residuals of each event.
7.2.2 Likelihood-Based Reconstructions
The advanced directional reconstructions use a maximum-likelihood algorithm2
where the hit pattern is assumed to be caused by a relativistic (β = 1) infinite
muon track traversing the detector, causing emission of Cherenkov light. The
likelihoods are maximized numerically using Minuit [137] and the solution
depends on the expected photon arrival times at the DOMs and the optical
properties of ice.
The infinite track is defined by $a = (\vec{r}_0, \hat{p}, t_0)$ where $\vec{r}_0 = (x_0, y_0, z_0)$ corresponds to an arbitrary point along the track, $t_0$ is the time at $\vec{r}_0$, and $\hat{p} = (\theta, \phi)$
defines the incoming direction of the track in spherical coordinates, see the
definition of the coordinate system in section 5.1.1.
The experimental data for each event consists of a pulse-series with DOM
positions $\vec{r}_i$ and corresponding leading edge times $t_i$. We introduce the so-called residual time $t_{\mathrm{res},i}$ for each pulse $i$. It is defined as the time difference
between the observed arrival time ti and the hypothetical arrival time, tgeo,i , of a
so-called direct photon, emitted under the Cherenkov angle, traveling straight
to the receiving DOM at position ri without scattering or absorption:
$$t_{\mathrm{res},i} \equiv t_i - t_{\mathrm{geo},i}, \qquad (7.2)$$
where tgeo,i is defined using the geometrical construction illustrated in figure
7.3.
$$t_{\mathrm{geo},i} = t_0 + \frac{\hat{p} \cdot (\vec{r}_i - \vec{r}_0) + d \tan\theta_c}{c}, \qquad (7.3)$$
where the Cherenkov angle θc is the fixed angle under which the Cherenkov
photons are emitted along the track3 and d is the closest distance from the
track to the DOM, i.e., unscattered Cherenkov photons are expected to arrive
2 For practical reasons we often equivalently perform a minimization of − log L instead of maximizing L.
3 For the sake of clarity we neglect the effect of the small difference between group velocity and phase velocity, see section 4.2.
to a DOM i located at ri at time tgeo,i , see figure 7.3. That is only true for
photons emitted from the Cherenkov emission point (see gray circle in figure
7.3). Photons which are emitted before or after that point and somehow land at
ri will arrive later than tgeo,i , even if they travel in a straight line. The strongest
effect on the photon arrival times $t_{\mathrm{res},i}$ comes from scattering and absorption in the ice, but the orientation of the DOM axis relative to the track also plays a role.
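A sketch of the geometric arrival time of equation 7.3 and the residual of equation 7.2, with hypothetical names and the group/phase-velocity subtlety of footnote 3 ignored:

import numpy as np

C_M_PER_NS = 0.2998              # the single speed c of eq. 7.3
THETA_C = np.arccos(1.0 / 1.32)  # Cherenkov angle for n ~ 1.32

def t_geo(r_dom, r0, t0, p_hat):
    """Eq. 7.3: expected arrival time of a direct Cherenkov photon at
    r_dom for an infinite track through r0 at t0 with direction p_hat."""
    d_along = np.dot(p_hat, r_dom - r0)                    # along the track
    d_perp = np.linalg.norm(r_dom - r0 - d_along * p_hat)  # closest approach d
    return t0 + (d_along + d_perp * np.tan(THETA_C)) / C_M_PER_NS

def t_res(t_hit, r_dom, r0, t0, p_hat):
    """Eq. 7.2: time residual of an observed pulse."""
    return t_hit - t_geo(r_dom, r0, t0, p_hat)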
The unknown track parameters of a are determined from experimental data
by minimizing the expression [136]:
$$-\log \mathcal{L}(t_{\mathrm{res}}\,|\,a) = -\sum_{i}^{N} \log p(t_{\mathrm{res},i}\,|\,a), \qquad (7.4)$$
where L is the likelihood function and p(tres,i |a) is the PDF representing the
probability of observing the experimentally measured values tres,i at ri given a.
Equation 7.4 is correctly defined as long as p(tres,i ,ri ) is provided and the
sum runs over all detected pulses. Typically, however, we approximate this
relation by only using the time of the first recorded pulse on each DOM. Indeed
the first p.e. is likely to be the least scattered and hence carry most information
about the true track direction, i.e., we use the so-called SPE PDF, p1 (tres,i ),
describing the photon arrival time of a detected p.e. in the case when only one
photon is observed at each DOM.
This approximation is sufficient for events with low energies, but at higher
energies, where the light output is large enough to produce several direct or
semi-direct photons in each DOM, the likelihood becomes distorted. This is
due to the fact that the arrival time distribution for an arbitrary photon is much
wider than the distribution for the first photon. The latter is more strongly
peaked at smaller time residuals than the former. The Multi Photoelectron
(MPE) PDF modifies the SPE PDF to account for the total charge observed in
each DOM. The arrival time distribution of the first out of N photons is [136]:
$$p_1^N(t_{\mathrm{res},i}) = N \cdot p_1(t_{\mathrm{res},i}) \cdot \left( \int_{t_{\mathrm{res},i}}^{\infty} p_1(t)\,dt \right)^{(N-1)}. \qquad (7.5)$$
The PDF p1 is determined with simulations of photon propagation in the
medium using two different approaches: an approximative description using
the analytic Pandel function [138] as described in [136] and splined look-up tables created using Photonics (see section 6.1.2). The MPE description is very
important for high-energy analyses, but it will only give a modest improvement over SPE in the low-energy events typical for the analysis presented in
this thesis.
We define direct hits by considering different classes of pulses based on
their individual tres value, see table 7.1. Variables based on hits of a particular
class are used in the offline event selection described in chapter 10.
[Figure: Pandel PDF [ns⁻¹] versus tres [ns] for perpendicular distances d = 50, 100, 150, 200, and 250 m.]
Figure 7.4. Pandel time-residual PDFs for an infinite track hypothesis. The different
lines show the distribution for various values of the perpendicular distance d. The
distributions are smoothed using a Gaussian with a width of ∼ 15 ns set to match the
jitter time of the PMT [139]. Figure adapted from [106].
The Pandel Distribution
The probability to detect a photon with a given tres is given by [136]:
$$p(t_{\mathrm{res}}) \equiv \frac{1}{N(d)} \cdot \frac{\tau^{-(d/\lambda)} \cdot t_{\mathrm{res}}^{(d/\lambda - 1)}}{\Gamma(d/\lambda)} \cdot e^{- t_{\mathrm{res}} \cdot \left( \frac{1}{\tau} + \frac{c_{\mathrm{medium}}}{\lambda_a} \right) - \frac{d}{\lambda_a}}, \qquad (7.6)$$

$$N(d) = e^{-d/\lambda_a} \cdot \left( 1 + \frac{\tau \cdot c_{\mathrm{medium}}}{\lambda_a} \right)^{-d/\lambda}, \qquad (7.7)$$
where $d$ is the perpendicular distance between the track and the DOM (indicated in figure 7.3), $\lambda_a$ is the average absorption length, $c_{\mathrm{medium}} = c/n$ is the speed of light in ice, and $\lambda$ and $\tau$ are parameters determined empirically by MC simulations. These PDFs parametrize the average characteristics of the arrival times of photons and are further modified to include the effects of PMT jitter [139] and dark noise. Although the function is only directly dependent on the so-called source-receiver distance $d$, there is an implicit dependence on the track parameters $a$ through the definition of $t_{\mathrm{res}}$. Figure 7.4 shows the Pandel PDFs for an infinite track hypothesis at various source-receiver distances as a function of $t_{\mathrm{res}}$.

Table 7.1. Definition of classes of direct hits used to construct several event selection variables that are applied in chapter 10. The classes are based on the tres distribution defined in equation 7.2.

Class   tres interval [ns]
A       [-15, 15]
B       [-15, 25]
C       [-15, 75]
D       [-15, 125]
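Equations 7.6 and 7.7 are straightforward to evaluate numerically, as in the sketch below, which can reproduce curves like those in figure 7.4. The parameter values are placeholders of the order quoted in the literature, not necessarily those used in this thesis.

import numpy as np
from scipy.special import gammaln

def pandel_pdf(t_res, d, lam=33.3, tau=557.0, lam_a=98.0,
               c_medium=0.2998 / 1.32):
    """Eqs. 7.6-7.7 with t_res in ns and d, lam, lam_a in m.
    Evaluated in log space for numerical stability; parameter
    defaults are placeholder values."""
    k = d / lam
    # ln N(d) from eq. 7.7
    log_norm = -d / lam_a - k * np.log1p(tau * c_medium / lam_a)
    # ln of the unnormalized eq. 7.6
    log_p = (-k * np.log(tau) + (k - 1.0) * np.log(t_res)
             - t_res * (1.0 / tau + c_medium / lam_a)
             - d / lam_a - gammaln(k))
    return np.exp(log_p - log_norm)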
Spline Parametrization
A more accurate description of the photon arrival PDFs can be achieved by
using the look-up tables created with the Photonics software as described in
section 6.1.2. These tables contain the expected number of photons (at a distance away from the track), and the PDF for the time residual at which each of
these photons arrives. The numbers in this table have been obtained from extensive photon transport simulations using depth dependent optical properties.
7.2.3 Reconstructions and Performance
The minimization of equation 7.4 yields a set of best-fit parameters for a hypothesis â. The so-called reduced log-likelihood rlogl is introduced as a quality parameter of the reconstruction, and corresponds to the reduced χ2 for a
Gaussian probability distribution. It is defined as the minimum value of the
negative log-likelihood, − log L, divided by the number of degrees of freedom
(the number of data points (hits) minus the number of free parameters in the
fit).
For the track reconstructions we maximize the likelihood numerically to
find the best-fit five parameter solution consisting of the vertex point ( x̂0 , ŷ0 , ẑ0 ),
and direction (θ̂, φ̂). The numerical minimizer finds its result in an iterative
manner. The likelihood function and its gradient are evaluated in a few phase
space points. New phase space points are selected based on the results of the
first set, hopefully closer to the minimum. The likelihood function and its
gradient are evaluated for these new phase space points, and new points are
selected based on the results, etc. These minimizer-iterations are performed,
on average 100 − 200 times (maximally 10,000 times), in order to approach
the minimum within the configured tolerance.
The likelihood space can be quite complex and contain several local minima. The minimizer will typically land in the one that is the easiest to reach
by just rolling downhill in the 5-dimensional − log L landscape. To avoid getting stuck in such local minima, each reconstruction is performed with several
so-called iterations. The seed to each iteration is partly based on the previous
successful iteration, but rotated around its vertex position by picking pseudorandom directions (θ, φ) on a unit sphere. The vertex point is allowed to be
shifted along the track both before and after the rotation. It is typically moved
to the position of the smallest perpendicular distance to the so-called Center
of Gravity (COG) where the hits are spatially accumulated. The COG is defined as the charge weighted mean of DOM positions. Further, the time at the
vertex is transformed accordingly. The goal of these shifts is to modify the tres
distribution to better agree with a Cherenkov model (the minimizer has a hard time getting unstuck if given a seed with too many negative time residuals).
SPEFit2 and MPEFit
One of the main reconstructions used in the analysis is denoted SPEFit2, an
iterative fit with a single iteration, performed according to the prescription
above. It is constructed using the SPE PDFs and the approximative Pandel
function. The result from the Improved LineFit reconstruction is used as seed.
The reconstruction is used in the online data pre-selection described in chapter
8 and when forming the veto criteria for the offline event selection described
in chapter 10. MPEFit is an iterative fit using the MPE function in equation
7.5, and is based on the Pandel function just like SPEFit2. It is constructed with a single fit using the SPEFit2 result as input seed.
MPEFitSpline
The MPEFitSpline reconstruction is an infinite track reconstruction using MPE
PDFs, see equation 7.5, now based on splined tables of the arrival times of the
photons. It is used in the final likelihood analysis to search for clustering of
signal-like events. MPEFit is used as a seed to this reconstruction.
Figure 7.5 shows the likelihood space of the MPEFitSpline reconstruction
for six simulated muon-neutrino events at the final analysis level, see section
10.5. The plot in figure 7.6 shows the median likelihood space of simulated
events at the final analysis level. Each bin represents the weighted median of ln L using 282 events, weighted to a neutrino spectrum proportional to $E_\nu^{-2}\,e^{-E_\nu/10\,\mathrm{TeV}}$.
Performance
The performance of the different reconstructions is evaluated using the so-called space angle ψ, defined as the opening angle between two directional
vectors on a three-dimensional sphere. Using the spherical coordinates defined
in section 5.1.1 it is defined as:
$$\cos\psi = \cos\theta_1 \cos\theta_2 + \sin\theta_1 \sin\theta_2 \cos(\phi_1 - \phi_2), \qquad (7.8)$$

where the indices 1, 2 refer to the two directions being compared.
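Equation 7.8 translates directly into a small helper function (a sketch; the function name is ours):

import numpy as np

def space_angle(theta1, phi1, theta2, phi2):
    """Eq. 7.8: opening angle between two directions, in radians."""
    cos_psi = (np.cos(theta1) * np.cos(theta2)
               + np.sin(theta1) * np.sin(theta2) * np.cos(phi1 - phi2))
    return np.arccos(np.clip(cos_psi, -1.0, 1.0))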
Figure 7.7 shows the space angle distributions for simulated neutrino events
at final level of the event selection for various reconstructions. All events used
in the calculations are truly starting inside the instrumented detector volume,
using criterion p = 0.8 and z ≤ 300 defined in chapter 8. Further, they are truly
Figure 7.5. These figures show examples of the highly complex likelihood space of
the MPEFitSpline directional reconstruction for six simulated muon-neutrino events.
They are shown in local coordinates (θ, φ) at the final analysis level. The primary
neutrino energy is indicated in the upper left corners of each plot. The colors represent
the value of ln L and the intersection of the dotted black lines shows the reconstructed
best-fit direction. When the true muon position is located within the plot boundaries
it is illustrated with a white circle.
Figure 7.6. The figure illustrates the median likelihood space ln L of the MPEFitSpline directional reconstruction. It is shown in local coordinates (θ, φ) for 282 simulated muon-neutrino events at final level, weighted to a neutrino spectrum proportional to $E_\nu^{-2}\,e^{-E_\nu/10\,\mathrm{TeV}}$. The best-fit point of each event was moved to the coordinates (0,0).
down-going (i.e., originate from the SH), and interact through a CC interaction
producing a charged muon. The gray solid line represents the kinematic limit,
i.e., the scattering angle between the direction of the primary neutrino and the
produced muon track. The black solid, dashed, and dotted lines show the angle between the direction of the primary neutrino and the SPEFit2, MPEFit,
and MPEFitSpline reconstruction, respectively. As expected, they all perform
about the same, with a small improvement using MPEFitSpline, see table 7.2.
MPEFitSpline is therefore chosen for the final point source likelihood. Nevertheless, the median angular resolution of the final signal sample is 1.7° for a neutrino spectrum given by $E_\nu^{-2}\,e^{-E_\nu/10\,\mathrm{TeV}}$, with a central 90% energy range roughly between 100 GeV and 10 TeV.
7.3 Angular Uncertainty
To search for neutrino point sources within an almost isotropic distribution
of background events it is important to not only have a good reconstruction
but also a good handle on its uncertainty, even more so when dealing with
potentially quite weak sources. During the offline event selection, we explicitly select events with good reconstruction quality and small estimated angular
[Figure: space angle ψ [°] distributions in arbitrary units, shown on a logarithmic (top) and linear (bottom) scale.]
Figure 7.7. Space angle distribution of the reconstructions used in the analysis. The
plots show simulated neutrino events at final level of the event selection, with event weights corresponding to a neutrino spectrum proportional to $E_\nu^{-2}\,e^{-E_\nu/10\,\mathrm{TeV}}$. The gray line represents the kinematic limit, defined as the scattering angle between the
primary neutrino and the produced muon. The black solid, dashed, and dotted lines
show the angle between the primary neutrino and the SPEFit2, MPEFit, and MPEFitSpline reconstruction, respectively.
Table 7.2. Median space angle of the reconstructions used in the analysis, calculated
using equation 7.8. Values are calculated using simulated neutrino events at final
level of the event selection, see section 10.4. All events are truly starting inside the
instrumented detector volume, using criterion p = 0.8 and z = 300 defined in chapter 8. Further, they are truly down-going (i.e., originate from the SH), and interact
through CC interaction producing a charged muon. The kinematic limit represents the
scattering angle between the direction of the primary neutrino and the produced muon
track. A small improvement is seen for the MPEFitSpline reconstruction relative to LineFit, SPEFit2, and MPEFit, which is why it is chosen for the final point source likelihood.

Reconstruction    $E_\nu^{-2}$    $E_\nu^{-2}\,e^{-E_\nu/10\,\mathrm{TeV}}$
Kinematic         0.17°           0.41°
LineFit           4.3°            4.4°
SPEFit2           2.1°            2.0°
MPEFit            1.7°            1.9°
MPEFitSpline      1.4°            1.7°
uncertainty. The angular reconstruction uncertainty is estimated individually
for each event from the likelihood function used in the reconstruction. Two
different algorithms are used in the event selection, each presented below.
Cramér-Rao
The Cramér-Rao reconstruction uses the Fisher information matrix of the reconstruction SPEFit2 to estimate the angular uncertainty. It is a relatively
simple analytic calculation without iterative numerical approximations, and
is hence fast and can be computed for many events at an early level of the
analysis. The estimated angular uncertainty is defined as:
$$\sigma_{\text{Cramér-Rao}} = \sqrt{\frac{\sigma_\theta^2 + \sin^2\theta \cdot \sigma_\phi^2}{2}}, \qquad (7.9)$$
where σθ and σφ are the estimated directional uncertainties of θ, φ, respectively. The uncertainties are derived using the so-called Cramér-Rao bound,
stating that the inverted Fisher information matrix gives a lower bound to the
true resolution. The matrix is based on the behavior of the per-DOM PDFs
near the final fit results.
Paraboloid
The paraboloid fit is a numerical method that uses the actual likelihood space
of the reconstruction considered to estimate the angular uncertainty. This fit is
used at level 7 of the event selection, see section 10.4.7, to provide the angular
uncertainty of the MPEFitSpline reconstruction.
The estimated angular uncertainty is defined as:
$$\sigma_{\mathrm{Reco}} = \sqrt{\frac{\sigma_1^2 + \sigma_2^2}{2}}, \qquad (7.10)$$
where σ1(2) is the major (minor) axis of an ellipse that constitutes the section at constant log-likelihood, where the value of − log L has changed by 1/2 with respect to the best-fit value, of a paraboloid fitted to the likelihood space around the reconstruction minimum [140]. This construction corresponds to a 1σ confidence region (39% containment) for the reconstructed zenith and azimuth.
The paraboloid algorithm often underestimates the true angular uncertainty.
Therefore, we apply an energy dependent correction based on the median of
the pull. This is presented in detail in chapter 10. The pull itself is defined as
the ratio of the true angular error ψTrue to the estimated angular error σReco for
each event:
$$\mathrm{Pull} = \frac{\psi_{\mathrm{True}}}{\sigma_{\mathrm{Reco}}}. \qquad (7.11)$$
Events with a corrected value above 5◦ were rejected after optimization studies
of the final sensitivity. The paraboloid fit was used as the angular uncertainty
estimator in the final likelihood analysis.
7.4 Interaction Vertex
For a given directional reconstruction, we now reconstruct the length of the
track in the detector, while keeping the direction fixed. We use the finiteReco
algorithm [141] and the pulses in RTTWOfflinePulsesFR. The SPEFit2 result
is used as a seed. The goal of the algorithm is to calculate the starting and/or
stopping point using a finite track hypothesis. The starting point is used as
a proxy for the true neutrino interaction vertex in several steps of the event
selection. In particular, it is one of the most important variables in the creation
of the initial veto that makes it possible to select events from the full SH at
low-energies, see chapter 8.
In a simple first-guess approach it calculates the first emission point by projecting all hits within 200 m of the seed track onto the seed track under the
Cherenkov angle, and returning the first point of emission in time, see illustration in figure 7.8, where the dashed lines represent a cylinder with radius 200
m surrounding the track.
A more advanced algorithm is performed at a higher level of the analysis.
It minimizes a likelihood that estimates the starting and stopping point of the
muon by considering the so-called no-hit probability of DOMs, the probability to not observe photons given an infinite/finite track hypothesis. This is
illustrated in figure 7.9. The DOMs in the shaded regions, marked ‘Selected
DOMs’, before and after the starting and stopping point respectively, did not
Figure 7.8. Illustration of the first-guess construction in the finiteReco algorithm. Figure adapted from [142].
detect hits, and are considered in the likelihood. The time of the vertex is
allowed to shift to provide Cherenkov-like time residual distributions. Note
that through the seed reconstruction, there is an implicit dependence on the hit
probability for the DOMs located between the shaded regions.
The probabilities are defined as follows:
• p(noHit | Track) is the probability of the so-called ‘Track’ hypothesis,
i.e., the probability to not observe a hit under the assumption of an infinite track, and
• p(noHit | noTrack) is the probability of the so-called ‘noTrack’ hypothesis, i.e., to not observe a hit under the assumption of a track starting and
stopping at the reconstructed starting and stopping point, respectively.
Several different topologies can be distinguished: starting track, through-going track, stopping track, and fully contained track. The photon arrival time
PDFs, used for the ‘Track’ hypothesis, are taken from the look-up tables created using Photonics as described in section 6.1.2. From these tables we extract the expected number of photoelectrons λ, as a function of depth in the
ice as well as the perpendicular distance and orientation of the DOMs relative
to the specified track hypothesis. For the ‘noTrack’ hypothesis, λ is sampled
from a noise hit distribution.
The general no-hit probability is defined as [141]:
$$p_\lambda(\mathrm{noHit}) = p_\lambda(0) = \frac{\lambda^0}{0!}\,e^{-\lambda} = e^{-\lambda}. \qquad (7.12)$$
The likelihood L is defined by multiplying the individual contributions
from each selected DOM i [141]:
$$\mathcal{L}(\mathrm{noHit}\,|\,\mathrm{Track}) = \prod_i p_i(\mathrm{noHit}\,|\,\mathrm{Track}), \qquad (7.13)$$

$$\mathcal{L}(\mathrm{noHit}\,|\,\mathrm{noTrack}) = \prod_i p_i(\mathrm{noHit}\,|\,\mathrm{noTrack}). \qquad (7.14)$$
The likelihood ratio between the starting track (‘noTrack’) hypothesis and
the infinite track hypothesis is maximized with respect to the starting and stopping position respectively. Technically, the stopping point is calculated first,
keeping the starting point fixed at the position given by the first-guess algorithm. The resulting stopping point is then kept fixed in a similar determination
of the starting point.
The likelihood is used as a quality parameter to estimate the statistical confidence in the fit. We also derive a so-called reduced log-likelihood (rlogl)
value similar to the one defined in section 7.2.3. Furthermore, the individual
likelihood for different track hypotheses can be computed through equation
7.13 and 7.14. In this case no minimization is done, but the probability is
calculated based on the starting and stopping point from the first-guess algorithm. The ratio between the probability under a starting and infinite track
hypothesis, respectively, defines one of the variables used in event selection.
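The core of the construction in equations 7.12-7.14 reduces to sums of per-DOM expectations, as in this sketch (hypothetical names; the λ values would come from the Photonics tables and the noise model):

import numpy as np

def log_likelihood_nohit(lambdas):
    """Eqs. 7.12-7.14: ln L = sum_i ln p_i(noHit) = -sum_i lambda_i,
    for per-DOM photoelectron expectations lambda_i."""
    return -np.sum(lambdas)

def starting_track_logratio(lambda_track, lambda_notrack):
    """Log-likelihood ratio between the 'noTrack' and infinite 'Track'
    hypotheses for the selected unhit DOMs; positive values favor a
    starting track."""
    return (log_likelihood_nohit(lambda_notrack)
            - log_likelihood_nohit(lambda_track))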
Figure 7.9. Illustration of the advanced finiteReco reconstruction. The DOMs in the shaded region are selected for the calculation of the no-hit probability under either a 'noTrack' or 'Track' hypothesis. The reconstructed starting and stopping points are evaluated one at a time for technical reasons. These reconstructions are seeded with the
points from the first-guess algorithm illustrated in figure 7.8. Illustration adapted from
[142].
7.5 Energy
High-energy muons lose most of their energy in multiple stochastic processes
along the track, see figure 4.4. This leads to large fluctuations in the energy
deposition pattern between muons of the same energy. Muon-neutrinos that
interact via a CC process inside the detector volume are characterized by the
lack of energy depositions before the interaction vertex, a hadronic cascade at
the interaction vertex and subsequent stochastic losses along the muon track,
i.e., the energy loss pattern deviates significantly from the assumption of a
constant average energy loss along the track.
Figure 7.10 shows a comparison between the primary neutrino energy and
the muon energy for simulated events. These are highly correlated although
the range of neutrino energies corresponding to a given Eμ is larger for lower
Eμ . Moreover, since low-energy starting events are typically approximately
contained in the detector, the reconstructed energy can be used as a proxy for
the true neutrino energy.
7.5.1 Millipede
The energy loss pattern is reconstructed using an algorithm called Millipede. It
fits hypothetical cascade-like energy depositions along the reconstructed track
[143]. The depositions are modeled by dividing the track into 15 m track
segments and treating each segment as a cascade with a direction equal to that
of the track and an energy that is completely independent from the energies
deposited at the other segments.
Figure 7.10. Muon energy as a function of primary neutrino energy. The plot is shown
for events weighted to an $E_\nu^{-2}\,e^{-E_\nu/10\,\mathrm{TeV}}$ signal spectrum.
Figure 7.11. An illustration of the results from the Millipede energy unfolding algorithm. This particular event has a large reconstructed deposition outside of the instrumented volume of the detector. The size of the spheres indicates the DOM charge and
the energy depositions in logarithmic scale. The timing of the pulses in TWSRTOfflinePulses are shown in gray scale (white = early, black = late). The orange spheres,
positioned along the track (MPEFitSpline, black arrow), represent the positions of the
reconstructed energy depositions. Depositions that appear outside the detector border
are rejected when constructing the final energy proxy.
Millipede is based on a Poissonian likelihood L with mean value λ = ρ + Σ_i E_i Λ_i for the number of photoelectrons k observed in a DOM, given ρ expected noise hits and the expected number of photoelectrons per unit energy Λ_i for track segments i with energy E_i [143]:

ln L = k ln(λ) − λ − ln(k!).    (7.15)

The likelihood can be rewritten as [143]:

ln L = k ln(E · Λ + ρ) − E · Λ − ρ − ln(k!).    (7.16)

The final energy deposition pattern is solved numerically for the energy losses E by minimizing the likelihood via a linear unfolding [143]:

k − ρ = Λ · E.    (7.17)
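Equation 7.17 can be illustrated with a small numerical example. Since the segment energies must be non-negative, a non-negative least-squares solve is a natural stand-in; the following is a minimal sketch with a toy response matrix, not the actual Millipede implementation [143].

import numpy as np
from scipy.optimize import nnls

def unfold_energy_losses(Lambda, k, rho):
    """Solve k - rho = Lambda . E for non-negative segment energies E.

    Lambda : (n_doms, n_segments) expected photoelectrons per unit energy
    k      : (n_doms,) observed photoelectrons per DOM
    rho    : (n_doms,) expected noise photoelectrons per DOM
    """
    E, _residual = nnls(Lambda, k - rho)  # enforces E_i >= 0
    return E

# Toy example: 4 DOMs observing a track split into 3 segments.
Lambda = np.array([[0.8, 0.1, 0.0],
                   [0.3, 0.6, 0.1],
                   [0.0, 0.4, 0.7],
                   [0.0, 0.1, 0.5]])
k = np.array([42.0, 55.0, 30.0, 12.0])
rho = np.full(4, 0.5)
print(unfold_energy_losses(Lambda, k, rho))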
Millipede solves the energy loss distribution and returns the charge and location of each reconstructed energy loss. For further details see [143]. The result of such a calculation for a simulated starting neutrino event is shown in figure 7.11. The spheres in gray scale indicate the size and timing of pulses in TWSRTOfflinePulses, while the orange spheres, positioned along the track, are the energy depositions fitted by the Millipede algorithm. An example of an energy loss reconstructed to have occurred outside of the instrumented detector volume is clearly visible on the left side of the figure. Such depositions, so-called outliers, are removed and a final estimate of the deposited energy is formed by summing up all remaining depositions.
Although Millipede reconstructs the visible energy of a muon track in the detector, the final value is used as an energy proxy for the true neutrino energy. The correlation for simulated neutrino events, before and after removal of the outliers, can be seen in figure 7.12. The upper plot shows the sum of energy depositions reconstructed with Millipede (including outliers), while the bottom plot shows the final energy proxy. Both are plotted as a function of the true neutrino energy. As seen in the top plot, large depositions outside the detector shift the energy proxy to values far from the true one. These overestimates are thought to be caused by the fact that the implementation of the algorithm used cannot distinguish between a large deposition outside of the detector producing a small signal on the edge, and a small noise hit on that same edge. By only including depositions that appear inside the geometrical volume (defined as 100% of the distance from string 36 to the outer-layer corner DOMs and within ±500 m in z-coordinate, see figure 8.1) we can improve the correlation. This energy proxy is useful all the way down to 100 GeV.
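The containment check can be illustrated with a simple point-in-polygon test. The sketch below assumes depositions given as (x, y, z, energy) tuples; the corner coordinates are placeholders, not the real string positions.

import numpy as np
from matplotlib.path import Path

# Placeholder corner positions of the outer-layer corner DOMs in the (x, y)-plane.
corner_xy = np.array([[-450, -200], [-290, 400], [90, 500], [470, 250],
                      [500, -150], [300, -450], [-100, -500], [-400, -400]])
border = Path(corner_xy)

def energy_proxy(depositions):
    """Sum depositions inside the geometrical volume (polygon and |z| <= 500 m)."""
    total = 0.0
    for x, y, z, e in depositions:
        if border.contains_point((x, y)) and abs(z) <= 500.0:
            total += e  # keep only contained depositions
    return total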
Summary
In this chapter we have discussed the different event reconstructions used in
this thesis to quantify each event in terms of direction, energy, and quality. In
the next chapter we describe the initial online filter developed exclusively for
the Low-Energy Starting Events (LESE) analysis. This analysis is described
in detail in chapter 10.
Figure 7.12. The upper plot shows the sum of all energy depositions reconstructed with Millipede, while the bottom plot shows the final energy proxy. Both are plotted as a function of the true neutrino energy. Note that the plots show weighted distributions corresponding to a neutrino signal spectrum of Eν^−2 e^(−Eν/10 TeV).
8. Opening Up a Neutrino Window to the
Southern Hemisphere
“The real voyage of discovery consists not in seeking
new lands but seeing with new eyes.”
Marcel Proust
Rejection of the atmospheric muon background is of prime interest for most IceCube analyses, particularly for the ones targeting the Southern hemisphere (SH), where the Earth cannot be used as a filter. The traditional online filters for the SH effectively use a charge cut, focusing on keeping events above 100 TeV, as a means to reduce the background of atmospheric muons. To reach lower energies, where the background is large, a new filter had to be developed utilizing parts of the detector as a veto for incoming muon events.
This chapter describes the development of the so-called Full Sky Starting
(FSS) filter, a filter selecting starting events. It is designed to optimize the
event collection at neutrino primary energies < 100 TeV, for a wide range of
analyses such as: studies of the Galactic Center (GC) and the Sun as sources
of neutrinos from WIMP interactions, SH neutrino point source analyses (both
time dependent (transients) and time independent), and studies of the galactic
plane and other extended astrophysical objects.
The FSS filter has been used successfully in two point source analysis dedicated to events below 100 TeV. One, operating in the region between 100 GeV
and a few TeV, is the main topic of this thesis and is covered in chapters 9 and
10. The other focuses on energies in the range from a few TeV to about 100
TeV and is presented in [144]. Studying neutrino emission from the region
of the Galactic plane would be a natural extension once a multi-year sample
exist, but this analysis has not yet been designed.
Since the muons constituting the main background are produced in the atmosphere, they always enter the detector from the outside. However, due to the sparsely distributed DOMs, some muons, especially at low energies, may "leak" in without leaving a detectable signal in the outer detector layers. The filter utilizes the outermost parts of the IceCube detector to construct vetoes against incoming atmospheric muons, searching for starting events. At the same time we try to
Figure 8.1. Bird’s eye view of IceCube illustrating the definition of the vetoes used
in the event selection. The outermost (light gray) polygon represents the full distance
(p = 1.0) between each of the eight (8) corner strings (1, 31, 75, 78, 72, 74, 50, 6)
and string 36. The dark gray polygon illustrates the cut region defined by p = 0.9, i.e.,
defined by 90% of the distance from string 36 to each of the outermost corner strings.
Similarly the dashed gray polygon represents p = 0.8, the region used to define truly
starting signal events. See figure 5.6 for further details. Credit: Illustration by Henric
Taavola, modified by including polygon shapes to indicate cut and fiducial regions
used in the analysis.
maximize the active, so-called fiducial, volume, not only utilizing the denser
DeepCore sub-array but also large parts of the standard in-ice strings.
Historically, simpler starting event filters have been running with partial detector configurations even during the construction of IceCube. However, these
targeted specific regions of interest, such as the Galactic Center, or focused on
low-energy events appearing primarily in the DeepCore array. The first implementation of the FSS filter ran in the first year of the completed 86-string
detector. Today it constitutes one of the standard IceCube filters, with an accumulated livetime of more than 4 years. Since the filter also accepts events from the SH it has a very high passing rate; its implementation was made practical through the development of a compact data storage format called SuperDST (Super Data Storage and Transmission). Prior to and during the first year of
Figure 8.2. Zenith angle distributions before the FSS filter. The green line represents the true neutrino direction, for a simulated assumed isotropic signal neutrino
−Eν /1 TeV (solid), E−2 e−Eν /10 TeV
flux, weighted to the following signal spectra: E−2
ν e
ν
−3
(dashed), and Eν (dotted). All other distributions show the reconstructed zenith angle (SPEFit2). Experimental data is shown as black dots and is further illustrated by
the gray shaded area. Signals are shown as blue (orange) lines, following the same
style as the green lines for the corresponding spectrum, and illustrate neutrino signal
spectra with (without) cuts on the true direction and true interaction vertex position
as described in section 10.4. All signal distributions are normalized to experimental
data.
operation, intensive tests were made to ensure the stability and correct implementation of both the filter algorithm and the new data format.
Figure 8.1 shows a bird's eye view of IceCube and illustrates the definition of some important parameters used to define the vetoes used in the event selection. The fractional distance from string 36 to the outermost corner strings (1, 31, 75, 78, 72, 74, 50, 6) is denoted p and defines the corners of a polygon-shaped region. The outermost light gray polygon represents the full distance (p = 1.0). The dark gray polygon illustrates the cut region, defined by p = 0.9, used, e.g., in the third step of the FSS filter as described in section 8.1. Similarly, the dashed gray polygon represents p = 0.8, the region used to define truly starting signal events in the analysis.
The Low-Energy Starting Events (LESE) analysis presented in chapter 10
builds upon the development of the FSS filter and would not have been possible without it. The event distribution for the final event sample is significantly
different in terms of energy and direction compared to classic point source
analyses in IceCube using through-going muons, see figure 10.2.
The zenith distributions, before the application of the FSS filter, are shown in figure 8.2. The green line represents the true neutrino direction, for a simulated assumed isotropic signal neutrino flux, weighted to the signal spectra: Eν^−2 e^(−Eν/1 TeV) (solid), Eν^−2 e^(−Eν/10 TeV) (dashed), and Eν^−3 (dotted). These are all peaked at the horizon due to the integration over space angle. All other distributions show the reconstructed zenith angle (SPEFit2). Experimental data (dominated by atmospheric muons) is shown as black dots and is further illustrated by the gray shaded area. Signals are shown as blue (orange) lines, following the same style as the green lines for the corresponding spectrum, and illustrate neutrino signal spectra with (without) cuts on the true direction and true interaction vertex position as described in section 10.4. These cuts are made to illustrate the distribution of neutrinos truly starting inside the detector volume. Hence, the blue distribution with θ > 90° corresponds to mis-reconstructed events from the Southern sky (θ ≤ 90°).
8.1 The FSS Filter
The FSS filter consists of several consecutive steps, both hit pattern based
vetoes as well as cuts on the reconstructed interaction vertices. The former are
based on a map of uncleaned HLC pulses. The filter algorithm is applied to
each event and can be described in the following steps:
i. A hit pattern based top veto rejects events with HLC pulses in any of the DOMs in the five topmost DOM layers. This veto is applied to DOMs with numbers 1-5 on the 78 standard strings¹.
ii. A hit pattern based side veto rejects events where the first HLC pulse occurs in a DOM on any of the strings of the outermost string layer. The string layers are conveniently defined using the bird's eye view of IceCube, see figure 8.1. This veto is applied for strings in layer 1².
iii. The neutrino interaction vertex is reconstructed using the simple finiteReco
algorithm illustrated in figure 7.8.
iv. A cut on the reconstructed vertex in the (x, y)-plane is performed. The
event is rejected if the interaction vertex is reconstructed to be outside of
the polygon shaped region defined by p = 0.9.
v. A cut on the reconstructed z-position of the vertex is performed. The event
is rejected if the z-coordinate of the interaction vertex is reconstructed
above 400 m.
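A condensed sketch of these five steps is given below, assuming simple event and pulse containers. Here reco_vertex (standing in for the simple finiteReco first guess) and p_polygon_09 (the p = 0.9 polygon) are illustrative stand-ins; the string list follows footnote 2 below.

# Outermost string layer (layer 1), see footnote 2.
OUTER_LAYER_STRINGS = {1, 2, 3, 4, 5, 6, 7, 13, 14, 21, 22, 30, 31, 40, 41,
                       50, 51, 59, 60, 67, 68, 72, 73, 74, 75, 76, 77, 78}

def passes_fss_filter(event, p_polygon_09, reco_vertex):
    pulses = sorted(event.hlc_pulses, key=lambda p: p.time)
    # (i) top veto: HLC pulse in any of the five topmost DOM layers
    #     (DOM numbers 1-5 on the 78 standard strings)
    if any(p.dom <= 5 and p.string <= 78 for p in pulses):
        return False
    # (ii) side veto: first HLC pulse on the outermost string layer
    if pulses and pulses[0].string in OUTER_LAYER_STRINGS:
        return False
    # (iii) reconstruct the interaction vertex with the first-guess algorithm
    vx, vy, vz = reco_vertex(event)
    # (iv) vertex must lie inside the p = 0.9 polygon in the (x, y)-plane
    if not p_polygon_09.contains_point((vx, vy)):
        return False
    # (v) vertex must be reconstructed below z = 400 m
    return vz <= 400.0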
The filter acceptance rate varies around 190 Hz and is dominated by atmospheric muons leaking in through the veto layers. Note that the filter does not
¹ The standard IceCube strings contain 60 DOMs each, deployed at roughly the same depth. The DOMs on each string are numbered starting from the top with number 1.
² The outermost layer, layer 1, consists of strings 1-7, 13, 14, 21, 22, 30, 31, 40, 41, 50, 51, 59, 60, 67, 68, 72-78.
(a) Experimental data before FSS.
(b) Experimental data after FSS.
(c) Signal E^−2 e^(−E/10 TeV) before FSS.
(d) Signal E^−2 e^(−E/10 TeV) after FSS.
(e) Signal E^−3 before FSS.
(f) Signal E^−3 after FSS.
Figure 8.3. The plots show the reconstructed interaction vertex distributions in the (x, y)-plane before (after) application of the FSS filter in the left (right) column. The upper row shows experimental data (color shows the rate), while the second and third rows illustrate the distributions for signal weighted to Eν^−2 e^(−Eν/10 TeV) and Eν^−3, respectively (color shows the rate normalized to exp. data). The string positions are shown as black dots, while the polygon shape represents the containment criterion (p = 0.9) applied in the side veto of the FSS filter. See text for further explanation of events outside the polygon.
(a) Experimental data before FSS.
(b) Experimental data after FSS.
(c) Signal E^−2 e^(−E/10 TeV) before FSS.
(d) Signal E^−2 e^(−E/10 TeV) after FSS.
(e) Signal E^−3 before FSS.
(f) Signal E^−3 after FSS.
Figure 8.4. The plots show the reconstructed interaction vertex distributions before (after) application of the FSS filter in the left (right) column. The upper row shows experimental data (color shows the rate), while the second and third rows illustrate the distributions for signal weighted to Eν^−2 e^(−Eν/10 TeV) and Eν^−3, respectively (color shows the rate normalized to exp. data). The z-coordinate is shown as a function of the radius r = √(x² + y²). The applied cut is shown as a horizontal black line at z = 400 m. The effect of cutting is seen clearly in the plots in the right column. See text for further explanation of events above 400 m.
Figure 8.5. finiteReco z-coordinate, before (upper) and after (lower) the FSS filter. Experimental data is shown as black dots and is further illustrated by the gray shaded area. The background simulation of atmospheric muons is presented as a solid red line (only in the lower plot). Signals are shown as blue lines: Eν^−2 e^(−Eν/1 TeV) (solid), Eν^−2 e^(−Eν/10 TeV) (dashed), Eν^−3 (dotted). Signal distributions are normalized to experimental data. In the upper plot we also show orange lines representing simulated signal without cuts on true direction and true interaction vertex position, as well as green lines representing the true primary neutrino vertex distribution. For further details, see text and legend definitions in section 10.4.
contain explicit cuts on energy-related variables. However, there are implicit cuts due to the starting requirement, since high-energy background events are more likely to leave traces in the veto regions. Events that pass the filter are sent north for further processing (∼4,700 MB/day)³.
Figure 8.3 shows the distributions of reconstructed interaction vertices in the (x, y)-plane, before and after the FSS filter. The upper row shows experimental data, while the second and third rows illustrate the distributions for simulated signal weighted to Eν^−2 e^(−Eν/10 TeV) and Eν^−3, respectively. For experimental data, color indicates the rate at the given level, while for signal the color corresponds to the rate of signal events normalized to the total rate of experimental data. The plots in the left column are generated using events captured by a so-called minimum bias filter selecting every 1,000th event that triggers the detector. The right column shows the results of a second round of vertex reconstructions performed offline after the application of the filter and transfer to the North. The whole reconstruction chain is re-applied for technical reasons. Random components in the chain (seed picking etc.) result in a few events being reconstructed to start outside of the border defined by the filter. These are rejected at level 2 as described in section 10.4.2.
Similarly, figure 8.4 shows the distributions of reconstructed interaction vertices in the (z, r)-plane, where r = √(x² + y²) is the geometrical radius in the (x, y)-plane. Note the accumulation of events in the dense DeepCore array, especially for the Eν^−3 spectrum. Further, we observe a deficit at the location of the main dust layer, discussed in section 5.2. This region is less sensitive to optical light due to increased absorption and scattering of photons. A consequence of the dust layer is that the distribution slightly below −150 m has a concentration of found vertices. This is due to events that interact in the dust layer but remain unobserved until the track reaches the sensitive region below the dust. The effect of the dust layer is clearly visible in figure 8.5, where the projected distribution of the z-coordinate is shown before (upper) and after (lower) the FSS filter. Due to the lack of processed simulated background data at trigger level, the upper plot is missing the red solid line representing atmospheric muons seen in the lower panel. In the upper plot we show orange lines representing the different signal spectra without cuts on true direction and true interaction vertex position (these cuts are described in section 10.4). The peak at z ∼ 500 m mainly consists of events starting above the instrumented region, while the leftmost peak is due to truly up-going events. Further, the upper plot also shows green lines representing the true primary neutrino interaction position.
³ The total allocation for all IceCube filters is ∼100 GB/day.
Summary
In this chapter we introduced the FSS filter and discussed its implementation, including vetoes based on both the hit pattern of pulses in the detector as well as cuts on the reconstructed interaction vertices. This filter is the cornerstone of the analysis presented in chapter 10 and enables point source searches to be extended to low energies in the full SH.
9. Point Source Analysis Methods
“Look up at the stars and not down at your feet.
Try to make sense of what you see, and wonder about
what makes the universe exist. Be curious.”
Stephen Hawking
We look for point sources by searching for a spatial clustering of signal-like events in a distribution of background events, uniform in declination bands. We follow a procedure similar to that developed in [113, 145], where the final sample is analyzed using an un-binned maximum likelihood function in which the events contribute individually, with a reconstructed position, reconstructed energy, and an estimate of the angular uncertainty.
This approach uses the event-by-event information including a precise estimate of the detector angular resolution. Further, the energy can be used as
a means to separate signal from the background, in particular if the signal
events are predicted to follow a slightly harder spectrum than that of atmospheric muons or atmospheric neutrinos. The event times can be used to study
neutrino bursts or periodic emission but in this analysis we focus on steady
sources of neutrino emission probing the time-integrated neutrino flux.
In this chapter we discuss the hypotheses and likelihood functions used to
characterize the background and signal samples. Further, we define the test
statistic used for the hypothesis tests applied in chapter 10.
9.1 Hypothesis Testing
The goal of the analysis is to say something about the presence of a signal
from low-energy neutrino point sources. We formalize these statements using
so-called hypothesis tests where we compare a null hypothesis H0 , without
the presence of signal, to an alternative hypothesis HS , including the signal
description. These tests are designed for discovery and will either accept or
reject the null hypothesis at some confidence level depending on the observed
experimental data.
Figure 9.1. The PDF of the test statistic TS for the null hypothesis H0. Such a PDF can be constructed using thousands of pseudo-experiments where the directions of the events in the experimental data sample have been scrambled in right ascension, providing a background-like distribution of events, uniform in each declination band. The one-sided p-value corresponds to the probability of obtaining a TS equal to or higher than TSobs in the observed data.
The data in this analysis is modeled using two different hypotheses:
• Null hypothesis H0 : The data is described by a distribution of mainly
atmospheric muons which is uniform in right ascension (R.A.).
• Alternative hypothesis HS : The data is described by a distribution of
mainly atmospheric muons which is uniform in R.A. plus a localized
source of astrophysical neutrinos.
The degree of confidence in a given hypothesis can be quantified using a so-called test statistic TS that compares the observed data with what is expected under the null hypothesis. The TS is defined as the ratio of the likelihood for the best fit of the parameters to the likelihood of the null hypothesis. It is defined so that a large value indicates poor agreement with the null hypothesis.
The TS distribution is established by conducting 10,000 pseudo-experiments of the null hypothesis, where we scramble the directions of each event in the experimental sample in right ascension and calculate the TS-value after each scrambling. We further define a so-called p-value, the probability to observe an outcome as large or larger given that the null hypothesis is true. This p-value is illustrated in figure 9.1, where the PDF represents the distribution of TS-values for the null hypothesis. Note that this figure only serves as an illustration of the method. The actual PDFs for the data in this analysis look quite different, see e.g. figure 9.4 and the discussion in section 10.6.2.
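The scrambling procedure lends itself to a short sketch: build the null TS distribution from right-ascension-scrambled copies of the data and read off the one-sided p-value. Here compute_ts is a placeholder for the full likelihood maximization described in section 9.2.

import numpy as np

def null_p_value(ra, other_event_data, ts_obs, compute_ts,
                 n_trials=10_000, rng=np.random.default_rng(42)):
    """One-sided p-value from RA-scrambled pseudo-experiments."""
    ts_null = np.empty(n_trials)
    for t in range(n_trials):
        scrambled_ra = rng.uniform(0.0, 2.0 * np.pi, size=len(ra))
        ts_null[t] = compute_ts(scrambled_ra, other_event_data)
    # fraction of background trials with TS at least as large as observed
    return np.mean(ts_null >= ts_obs)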
Hypothesis tests with injected signal are used as a way to establish the level of signal the analysis is sensitive to. Hypothesis tests using the observed data are used to calculate limits on the existence of a signal if no signal was found, and to establish the amount of signal found if present in the final sample. This is further explained in section 10.6.1.
9.2 Likelihood and Test Statistic
The likelihood function L models the outcome of the experiment and is a function of two fit parameters: the number of signal events nS and the spectral index γ, assuming an unbroken power-law spectrum (i.e., Eν^−γ). The final likelihood function is a product over the N events in the final sample:

L(nS, γ) = ∏_i^N [ (nS/N) S(x_i, σ_i, E_i; x_S, γ) + (1 − nS/N) B(δ_i; E_i) ],    (9.1)

where x_i = (δ_i, α_i) (δ denotes declination, α R.A.), σ_i, and E_i are the position, angular uncertainty, and energy of each event i, x_S is the position of a source S with parameters nS and γ, and S and B represent the probability for an event to be signal- or background-like, respectively. The likelihood is for simplicity configured with an unbroken power-law. Broken spectra with exponential cut-offs are to some extent represented by spectra with large values of γ.
The signal PDF S consists of two separate parts: one modeling the spatial distribution as a two-dimensional Gaussian distribution centered on the source position with a standard deviation σ_i, and one modeling the calorimetric information (covered in detail after equation 9.6):

S = S(|x_i − x_S|, σ_i) × E(E_i, δ_i, σ_i; γ),    (9.2)

where |x_i − x_S| is the space angle difference between the source and an event i (see equation 7.8), and the spatial part is defined as:

S(|x_i − x_S|, σ_i) = (1/(2πσ_i²)) exp(−|x_i − x_S|²/(2σ_i²)).    (9.3)
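As a minimal illustration, the spatial part in equation 9.3 can be written directly as a function; space_angle and sigma are assumed to be in radians.

import numpy as np

def spatial_signal_pdf(space_angle, sigma):
    """Two-dimensional Gaussian of equation 9.3, centered on the source."""
    return np.exp(-space_angle**2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)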
The background PDF B also consists of two separate parts, a spatial part S_bkg and a calorimetric part E_bkg (covered in detail after equation 9.6):

B = S_bkg(δ_i) × E_bkg(E_i, δ_i, σ_i, γ),    (9.4)

where the spatial part is described by a spline fit, P_exp, to the full experimental data sample at final level, depending only on the declination of each event. S_bkg is easily described since IceCube has a uniform acceptance in right ascension due to its location at the South Pole:

S_bkg = (1/2π) P_exp(δ).    (9.5)
Figure 9.2. The declination distribution of experimental data at final level is shown in the black histogram. A spline fit to this histogram is shown as a red dashed line. Further, we show the distributions of a number of signal hypotheses in blue. All histograms are normalized so that the integral is equal to 1. The red line is used as the spatial PDF P_exp.
Figure 9.2 shows the background spline Pexp for the final event sample
in red. This is fitted to the histogram of experimental data shown in black,
binned using 40 equidistant bins in sin δ in the range −1.0 ≤ sin δ ≤ 0.0. As a
comparison we also show the distributions for three different neutrino signal
hypotheses in blue.
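A sketch of how such a declination PDF could be built, assuming declinations in radians and the binning quoted above; the actual analysis spline may differ in knot placement and smoothing.

import numpy as np
from scipy.interpolate import UnivariateSpline

def build_bkg_pdf(sin_dec, n_bins=40):
    """Spline fit P_exp to the sin(declination) histogram of the final sample."""
    counts, edges = np.histogram(sin_dec, bins=n_bins, range=(-1.0, 0.0),
                                 density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p_exp = UnivariateSpline(centers, counts, s=len(centers))
    # S_bkg(dec) = P_exp(dec) / (2 pi), equation 9.5 (uniform in R.A.)
    return lambda dec: p_exp(np.sin(dec)) / (2.0 * np.pi)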
To estimate the significance of an observation we define the test statistic TS as the likelihood ratio with respect to the null hypothesis (nS = 0)¹:

TS = 2 ln[ L(n̂S, γ̂) / L(nS = 0) ] = 2 Σ_i ln[ (n̂S/N)( S_i W_i / S_bkg,i − 1 ) + 1 ],    (9.6)

where W_i = E_i/E_bkg,i is the ratio of the calorimetric parts of the signal and background probabilities, and n̂S and γ̂ denote the best-fit values. Note that S_i, S_bkg,i, E_i, and E_bkg,i represent the distributions defined in equations 9.2 and 9.4, respectively, evaluated for an event i.
W is constructed by fitting a spline to the ratio of the calorimetric parts in
the three dimensions defined by the reconstructed energy E (‘Energy Proxy’),
declination δ, and angular uncertainty σ. The angular uncertainty was not
used in [145] but is included in this analysis to form a complete likelihood
¹ The factor 2 is included in concordance with Wilks' theorem.
expression avoiding biases as discussed in [146]. The splines are functions
of the source spectral index γ and are recalculated for each signal hypothesis
considered in the likelihood.
Figures 9.3a and 9.3b show the background and signal distributions at final level of the event selection as a function of the reconstructed energy and direction (for all σ). The binning used here represents the binning used to define the spline. The third dimension, σ, is binned using the bin edges (0.0, 0.5, 1.0, 3.0, 5.0). Note that the distributions shown in figures 9.3a and 9.3b are scaled in each declination bin so that the sum of bin-counts along the axis of the energy proxy equals one. This is also done for each distribution when forming the ratio, to ensure that no extra spatial weights are applied other than the ones from the spatial part S/S_bkg.
When constructing the ratio, bins outside of the domain of experimental data, but inside the signal domain, are mapped to the most signal-like value, calculated as the 99th percentile of the ratios above 1. This is done for practical reasons: when injecting signal MC events there is a certain probability of injecting events outside the region of experimental data, and this mapping allows fast evaluation of the injected trials without having to redefine the background sample including the signal for each injection. The impact of this procedure was shown to be negligible. Vice versa, for bins in the experimental data sample where there is no corresponding signal estimate, the ratio is set to 1, i.e., the calorimetric part of the likelihood has no preference for either signal or background.
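A bin-wise sketch of this edge handling, assuming histograms of signal and experimental data with matching binning (here flattened to arrays):

import numpy as np

def calorimetric_ratio(sig_hist, bkg_hist):
    """Signal/background ratio with the clamping described above."""
    ratio = np.ones_like(sig_hist, dtype=float)   # default: no preference
    both = (sig_hist > 0) & (bkg_hist > 0)
    ratio[both] = sig_hist[both] / bkg_hist[both]
    # bins with signal but no experimental data -> most signal-like value,
    # the 99th percentile of the ratios above 1
    if np.any(ratio > 1.0):
        cap = np.percentile(ratio[ratio > 1.0], 99.0)
        ratio[(sig_hist > 0) & (bkg_hist == 0)] = cap
    return ratio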
Figures 9.3c and 9.3d show spline fits to the ratio of signal over background, evaluated on a grid of 100 × 100 equal-sized pixels in E and sin δ, for Eν^−2 and two of the bins used in the third dimension, σ.
The likelihood ratio in equation 9.6 is maximized² with respect to nS and γ. We only fit for positive values of nS, i.e., nS ≥ 0. Additionally, we constrain γ to be in the range −1 to −6, seeded with the value −3.
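In practice this can be done with a bounded numerical minimizer. The sketch below assumes a callable neg2_log_lambda implementing −2 ln(L(nS, γ)/L(nS = 0)) (see the footnote) and mirrors the stated bounds and seed.

import numpy as np
from scipy.optimize import minimize

def fit_source(neg2_log_lambda, n_events):
    """Maximize the likelihood ratio; returns (TS, n_S-hat, gamma-hat)."""
    res = minimize(lambda x: neg2_log_lambda(x[0], x[1]),
                   x0=[1.0, -3.0],                       # seeds for (n_S, gamma)
                   bounds=[(0.0, n_events), (-6.0, -1.0)],
                   method="L-BFGS-B")
    n_s_hat, gamma_hat = res.x
    ts = -res.fun  # TS = 2 ln(L-hat / L_0)
    return ts, n_s_hat, gamma_hat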
The significance of the observation is quantified by comparing the best-fit TS-value with the distribution from the null hypothesis, i.e., from data sets of events randomized in right ascension. The latter corresponds to the probability of the observation occurring by chance, given the measured data sample.
It can be shown that the TS distribution, defined in equation 9.6, approximately follows a χ² distribution with n degrees of freedom in the limit of infinite statistics. This is a consequence of the so-called Wilks theorem [147, 148]. In this analysis n ∼ 2 since both nS and γ are allowed to float freely. However, the actual number of degrees of freedom is typically smaller, mainly since energy is not very helpful at low energies. This discrepancy is clearly seen in figure 9.4, showing the test statistic distribution for the null hypothesis (nS = 0) at δ = −50° over 100,000 trials. The single-sided p-value for an observed TSobs, see figure 9.1, can be found from this distribution. A χ² distribution for n = 2 ('Theory') is shown as a dotted line. This theoretical line does not always
² For practical reasons we perform a minimization of −2 ln(L(n̂S, γ̂)/L(nS = 0)).
[Figure 9.3 consists of four panels, each showing log10(Energy Proxy) versus sin δ: (a) experimental data, (b) signal Eν^−2, (c) the evaluated ratio spline (σ = 0.75°), and (d) the evaluated ratio spline (σ = 2°).]

Figure 9.3. The top row shows the background (left) and signal (right) distributions at final level of the event selection as a function of the reconstructed energy and direction. The signal is weighted to an Eν^−2 spectrum. These plots are scaled in each declination bin so that the sum of bin-counts along that axis of the energy proxy equals one. The bottom row illustrates the three-dimensional spline fits of the ratio of signal over background, used in the calorimetric part of the likelihood. The left (right) plot shows the spline evaluated in the bin including σ = 0.75° (σ = 2.0°), respectively. The splines are based on the ratio formed from the distributions in the upper row.
[Figure 9.4 shows the PDF of the test statistic TS for the null hypothesis (nS = 0), together with a fitted curve ('χ² Fit') and the theoretical χ² curve ('χ² Theory').]
Figure 9.4. The test statistic distribution for the null hypothesis, nS = 0, evaluated at
δ = −50◦ , is shown in the histogram. A χ2 distribution for n = 2 (‘Theory’) is shown
as a dotted line. The dashed line shows a fit to the histogram using a combination of
a χ2 and a Dirac delta-function at TS = 0. The latter will be used to calculate the final
p-values for this particular declination. See section 10.6.1 for more details.
agree with the distribution of TS-values and often overestimates the height of the distribution, leading to a bias towards a higher p-value for a given observation TSobs. A procedure was developed to reduce this bias and will be presented in section 10.6.1. It builds on fits of the observed TS distribution for a number of declinations, as illustrated by the dashed line shown in figure 9.4.
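Such a fit can be sketched as a mixture of a delta function at TS = 0 and a χ² component with an effective number of degrees of freedom; the exact procedure used in the analysis is the one presented in section 10.6.1.

import numpy as np
from scipy.stats import chi2

def fit_ts_distribution(ts_null):
    """Fit eta * chi2(ndf) + (1 - eta) * delta(0) to null-hypothesis TS values."""
    eta = np.mean(ts_null > 0)                           # weight of the chi2 part
    ndf, _loc, _scale = chi2.fit(ts_null[ts_null > 0], floc=0.0, fscale=1.0)
    def p_value(ts_obs):
        # P(TS >= ts_obs) under the fitted mixture
        return eta * chi2.sf(ts_obs, ndf) if ts_obs > 0 else 1.0
    return p_value, eta, ndf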
Summary
Hypothesis testing is a way to compare different hypotheses and provides a quantitative way of rejecting or accepting models at pre-defined confidence levels. In this chapter we have introduced the hypotheses and test statistic used in the analysis as well as the constituent likelihood PDFs, both spatial and calorimetric. The application of the method is presented in chapter 10, including detailed TS distributions and a review of the actual number of degrees of freedom in the likelihood fit.
10. A Search for Low-Energy Starting Events
from the Southern Hemisphere
“We don’t receive wisdom; we must discover it for ourselves after a journey
that no one can take for us or spare us.”
Marcel Proust
Classic IceCube point source analyses study the Northern Hemisphere (NH) using the Earth as a filter, effectively eliminating up-going atmospheric muons to provide a nearly pure sample of atmospheric neutrinos (mis-reconstructed muons can be removed to a high degree). IceCube's sensitivity is unrivaled in this part of the sky, as can be seen on the right-hand side of figure 10.1 (sin δ ≳ 0). IceCube's limits and sensitivities are shown as black dots and lines and are calculated for an Eν^−2 spectrum. The results are computed for a joint analysis between the classic through-going sample [149] and a starting track sample, denoted Medium-Energy Starting Events (MESE), for energies above 100 TeV [150]. The former uses 4 years of data¹, while the latter uses 3 years². The event selection used for the through-going analysis is sensitive in the entire NH and extends into the Southern hemisphere (SH), up to δ ∼ −5°, where the Earth and the Antarctic glacier still provide an efficient filter.
In the main part of the SH³, δ ≲ −5°, down-going atmospheric muon bundles⁴ constitute a significant background and it is very challenging to isolate a pure neutrino sample. Instead, the background is reduced by exploiting variables connected to event topology and energy loss pattern. Hence, the so-called through-going event sample mainly consists of atmospheric neutrinos in the NH and atmospheric muons in the SH. Note further that since high-energy neutrinos are absorbed in the Earth, the effective area for neutrinos reaches its maximum close to the horizon. The MESE sample utilizes the veto method
¹ Using data from the 40, 59, 79, and the first year of the 86-string configuration of IceCube.
² Using data from the 79, and the first two years of the 86-string configuration of IceCube.
³ The Southern hemisphere is defined as δ ≤ 0°.
⁴ Muon bundles consist of many low-energy muons that typically have a smoother energy loss distribution, while the energy loss of individual high-energy muons is dominated by stochastic losses.
Figure 10.1. IceCube muon-neutrino upper limits at 90% C.L. (black dots) shown for 38 pre-defined sources in the SH. Limits for the NH are not shown but are consistent with the sensitivity [149]. Further, we show the sensitivities as a function of declination for an unbroken Eν^−2 spectrum (black solid line), with hard cut-offs at 1 PeV (black dashed line) and 100 TeV (black dashed-dotted line). The limits and sensitivities are all calculated for the classic point source analysis in IceCube using through-going muon tracks [149] in combination with a starting track sample [150]. The latter uses three years of IceCube data and improves the limits in the SH (for δ ≤ −30°) by a factor of ∼5-20 (∼2-3) for fluxes ending at 100 TeV (1 PeV). Also shown are the point source sensitivities (red lines) and upper limits (red crosses) from ANTARES [151] using 4 years of data. Figure taken from [150].
developed for the so-called High-Energy Starting Events (HESE) analysis [3] in which a significant number of astrophysical neutrinos was first observed. However, the effective energy cut applied in the HESE analysis is lowered to a level corresponding to ∼100 TeV⁵. This sample improves the limits for δ ≤ −30° relative to the through-going sample by a factor of ∼5-20 (∼2-3) for fluxes with a so-called hard cut-off⁶ at 100 TeV (1 PeV). Note that the MESE sample only extends to a declination of −5°. The joint analysis is unchanged with respect to the through-going sample above this declination [150].
⁵ An explicit cut is set to remove events with less than 1,500 p.e. in the detector.
⁶ The spectrum is cut off at this maximum value but otherwise follows an unbroken power-law.
The cuts used to reduce the background in the SH also remove a significant part of
the low-energy signal. The sensitivity is then significantly worse for spectra predicting neutrinos of low energy. This is clearly seen in figure 10.1 as we apply a hard cut-off to the Eν^−2 spectrum. The dashed-dotted and dashed black lines show the sensitivity for an Eν^−2 spectrum with a hard cut-off at 100 TeV and 1 PeV, respectively.
The SH is of particular interest since it contains both the GC and the main
part of the Galactic plane, two of the most probable neutrino source candidates
(regions). The Low-Energy Starting Events (LESE) analysis described in this
chapter is a search for low-energy point sources of astrophysical neutrinos in
the SH. It focuses on neutrinos in the energy range from 100 GeV to a few TeV.
The analysis was made possible by the development of the Full Sky Starting
(FSS) data filter, described in chapter 8.
Figure 10.2 shows the central 90% energy ranges as a function of declination for the event samples of the through-going analysis (upper) and the LESE analysis (lower), each for a number of different spectra ranging from hard to soft. The distributions are shown for the final event selection of each analysis. The LESE analysis is the only IceCube analysis so far that can explore the low-energy regime (100 GeV to a few TeV) for muon-neutrinos in the entire SH. Further, as we will see in section 10.6.1, it provides the best IceCube point source sensitivity all the way up to ∼10 TeV.
The energy threshold for the LESE analysis was lowered significantly by utilizing veto techniques, i.e., by selecting events starting inside the detector volume as a way to distinguish the signal from the large background of atmospheric muons, see figure 10.3.
In this chapter we will present the event selection leading to the final sample
of 6191 neutrino-like candidate events and the following likelihood analysis
looking for a clustering of such events. We will present the results using the
first year of IceCube data from the completed array as well as an estimate of
the systematic uncertainties.
10.1 Analysis Strategy
The event selection focuses on identifying starting events, see figure 10.3,
building upon the stream of events that pass the FSS filter. Initially we apply
overall data quality cuts ensuring that we have enough charge and participating strings for directional reconstructions. Several different veto prescriptions
are applied in the subsequent steps. Further, a machine-learning algorithm is
applied that simultaneously analyzes several variables, creating a score based
on whether the events are signal- or background-like.
At the higher levels of selection, when the number of events is smaller,
more advanced and time-consuming reconstructions are performed. The final
analysis uses the standard IceCube point source likelihood method to look for
so-called hot-spots of neutrinos anywhere in the SH and additionally in a pre-defined list of candidate neutrino sources. Scrambled experimental data are used as background for these searches.

Figure 10.2. Signal distributions at final level in an Eν-sin δ plane. The figures show central 90% energy ranges for various spectra. The classic through-going analysis is shown in the upper panel, and the LESE analysis is shown in the lower panel.
Blindness
To avoid confirmation bias during development and optimization of the event
selection, we blind the R.A. of each event until the final analysis chain, including all cuts, has been fully developed and approved. Note that since IceCube
is located at the South Pole, we can use the azimuthal angle defined in detector local coordinates for each event without violating blindness. However, the
event timestamp needed to convert this position into R.A. is kept blind.
10.2 Experimental Data
This analysis uses data taken with the completed 86-string configuration of IceCube, denoted IC86-1⁷, between May 2011 and May 2012.
The IceCube DAQ divides data into so-called runs. If successful, each run contains ∼8 hours of data from the whole in-ice array and the IceTop air shower array at the surface. Some runs are cut short due to temporary malfunctions or during detector maintenance periods. The DAQ then automatically starts a new run, potentially with a partial configuration, i.e., with one or several DOMs or even entire strings removed. Since this analysis is strongly dependent on the presence and operation of the outermost layers of DOMs, which are used in the veto, we remove all partial runs with gaps in these critical regions.
This selection of so-called good runs is described in detail in section 10.2.2.
Several different samples of experimental data are defined. In particular,
we develop the whole event selection on the so-called burnsample.
10.2.1 Burnsample
The burnsample consists of a total of 12 runs grouped in sets of 3 consecutive runs (hence about 24 consecutive hours in each set), spread out evenly throughout the data-taking period. This dataset was defined at an early stage of the analysis and consists of runs marked good by the detector monitoring system. Note that one of these runs (118209) failed the good run selection (see section 10.2.2), but follow-up investigations showed no apparent cause. Nonetheless, such runs were excluded from the final sample. The impact on the event selection is minimal at most, since each step was carefully checked and verified against simulated muon background. A larger burnsample, denoted Burnsample-BIG, consisting of every 10th run (all runs with a run number ending in 0), was defined at higher levels where more statistics were needed.
⁷ The '1' indicates that it is data from the 1st year of the final configuration. Subsequent years are denoted IC86-2, IC86-3, etc.
(a) High-energy muon at level 1 of the event selection
(b) Low-energy muon at level 1 of the event selection
(c) High-energy signal event at level 8 of the event selection
(d) Low-energy signal event at level 8 of the event selection
Figure 10.3. Illustration of the different event topologies considered in the analysis. The color coding of the hits is illustrative of the timing of the event, with red meaning earlier hits and blue later hits. The top row shows the light pattern in the detector from two simulated atmospheric muon-bundle events. The bottom row shows the light pattern from two simulated muon-neutrinos interacting inside the detector volume.
Table 10.1. Definition of the different experimental datasets used in the LESE analysis, presented in this thesis.

Dataset          Runs                                          Livetime [s]
IC86-1           [118175, 120155]                              28435423 (∼329 d)
Burnsample-BIG   Runs ending with 0 in [118175, 120155]        2652273 (∼31 d)
Burnsample       118209-11, 118589-91, 118971-73, 119671-73    28801.27 (∼0.3 d)
10.2.2 Good Run Selection
All runs marked good by the detector monitoring system were used as a baseline for the good run selection. This list of runs was compiled by removing runs containing either manual overrides due to e.g. calibration conditions or 'Sweden camera' runs (i.e., light in the detector), runs with significant trigger and filter rate variations, or runs where very large parts of the detector were not taking data.
As mentioned above, partial runs constitute a problem for analyses dependent on the construction of vetoes. If a dropped-out string is located in the veto region, incoming muon events may leak in through the hole it creates, appearing to start inside the detector. To address this issue we manually checked the status of the detector through a series of tests.
The rate of each run at level 5 (the livetimes of the runs were extracted at
level 2) is illustrated by a dot in figure 10.4. Runs with a deviation within ±5%
of the so-called running mean, constructed using a sliding window of 15 runs,
were used in a polynomial fit to determine the average rate. For this calculation
we divided the data into two separate regions clearly distinguishable in the
figure. The shift occurring at run 118494, at the end of July 2011, was caused
by a change in DOM firmware. The selected runs before (after) the shift were
fitted using a 2nd (4th) order polynomial. Runs outside ±5% of the fit results,
indicated with dashed lines in each region, were rejected (black dots), while
runs inside the regions were kept (gray dots).
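A simplified sketch of the rate-stability selection, assuming one level-5 rate per run; the split at run 118494 and the subsequent polynomial refit are omitted for brevity, and the running mean is handled naively at the array edges.

import numpy as np

def stable_runs(rates, window=15, tolerance=0.05):
    """Boolean mask of runs within +-5% of the 15-run running mean."""
    rates = np.asarray(rates, dtype=float)
    kernel = np.ones(window) / window
    running_mean = np.convolve(rates, kernel, mode="same")
    deviation = np.abs(rates / running_mean - 1.0)
    return deviation <= tolerance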
Further, we checked the individual detector configuration for each run, making sure that the number of configured DOMs in the veto regions defined by the FSS filter (5 top layers, outermost side veto layer), as well as in the full in-ice array, was not lower than 97% of the number of DOMs deployed in each region.
This run selection discards about 4 (11)% of the livetime (number of runs)
initially considered. The total good livetime is finally about 329 days out of
342 days initially considered. Figure 10.4 also shows the seasonal variations
in IceCube due to changes in the atmospheric temperature and pressure. This
has an impact on the development of CR air showers in the atmosphere as
discussed in section 3.4.
Figure 10.4. Rate at level 5 for each run in IC86-1 considered good by the detector
monitoring system. The dashed black lines show the level at ±5% relative to two fitted
polynomials, one in each of the two regions defined by run 118494 at the end of July
2011. Runs with a rate outside these lines are marked with black dots and are excluded
from the final likelihood analysis. Further we exclude runs where less than 97% of the
deployed DOMs are configured.
10.3 Simulated Data
The simulation datasets used throughout the event selection are presented in
table 10.2. The atmospheric muons are simulated using CORSIKA datasets
9622 and 10309. These are combined in all plots to provide enough statistics
at both low and high energies. The neutrino signal as well as the atmospheric
neutrinos were simulated using NuGen dataset 10602 containing muon neutrinos only. These are weighted to the desired spectrum using the scheme
presented in section 6.2.
10.4 Event Selection
The event selection is illustrated in figure 10.5 and consists of several consecutive steps of cuts and data processing. Each of these steps focuses on a
different task and is referred to as a level. These are discussed extensively in
the following sections.
All cuts up to level 5 were carefully studied to ensure that the sample remained as unbiased as possible towards any true direction on the sky or a
particular energy region. This is important to avoid an early bias towards
Figure 10.5. The LESE event selection and analysis chain.
Table 10.2. Simulation datasets used in the analysis. CORSIKA events are generated using a 5-component model where 5 different primary nuclei are injected according to the specified generation spectrum. For details, see 6.1.1.

Generator   Type          Dataset   Generation spectrum   Angles θ [°]   Energy [GeV]
NuGen       νμ            10602     E^−2                  [0, 180]       [10^1, 10^9]
CORSIKA     5-component   9622      E^−2.6 (a)            [0, 89.99]     [600, 10^5]
CORSIKA     5-component   10309     E^−2                  [0, 89.99]     [10^5, 10^11]

(a) The proton spectrum is generated as E^−2.65.
well-reconstructed high-energy events or certain parts of the SH. Further, in
each step of the event selection we monitored the agreement between simulated background of atmospheric muons and the experimental data. Cuts were
only made for variables and in regions with good agreement.
The event selection was optimized for a neutrino signal described by a power-law spectrum with an exponential cut-off at 10 TeV: Eν^−2 e^(−Eν/10 TeV). Several other spectra were studied in parallel to verify the behavior of each variable in the selection. The optimization was made by maximizing the significance approximated as S/√B (unless stated otherwise), where S and B represent the number of signal- and background-like events, respectively. As previously mentioned, we use burnsamples of experimental data as representatives for the full IC86-1 data sample.
Figure Legends
Throughout the event selection we will use plots to illustrate the variables used
to separate signal- and background-like events. Unless stated otherwise, the
plots contain the following:
• Experimental data is shown as black dots and further illustrated by a gray shaded area. The statistical uncertainty is illustrated with black error bars. Note, however, that the statistical errors are very small, in particular at the first levels, which is why clear error bars may not always be visible.
• The total background simulation, including both atmospheric muons μ and atmospheric muon-neutrinos νμ, is shown using a red solid line. The individual contributions are shown as red dashed and dotted lines, respectively. Further, a red shaded area is used to indicate the spread in the statistical uncertainty of the total background sample. Note that the rate of atmospheric muon-neutrinos is initially several orders of magnitude lower than the rate of atmospheric muons. That is why atmospheric neutrinos only become visible in the plots at the higher levels.
• Neutrino signal events are shown as blue lines: Eν^−2 e^(−Eν/1 TeV) (solid), Eν^−2 e^(−Eν/10 TeV) (dashed), Eν^−3 (dotted). All simulated signal events shown are truly starting inside the instrumented detector volume (unless stated otherwise). The containment criterion used is p = 0.8 and z ≤ 300 m, defined in chapter 8 (see figure 8.1). Further, events are truly down-going (i.e., originate from the SH), interact through the CC interaction producing a charged muon, and are normalized to the total rate of experimental data (black dots).
10.4.1 Level 1
The IceCube triggers described in section 5.4 are applied at level 1. We apply
three different triggers to data, each aimed at catching events of slightly different character: SMT-8, SMT-3, and StringTrigger. Several reconstructions
are applied online to the triggered data before it is sent through the filtering
process described in chapter 8.
10.4.2 Level 2
Events that pass the FSS filter are sent north via satellite. The processing and filtering of subsequent levels is then performed offline, either on computers at UW-Madison or in Uppsala. Note that all filters and reconstructions performed online are re-done offline as the data arrive at the data farm. These new reconstructions are used in the event selection from level 2. Due to the use of random numbers in the reconstruction chains, offline reconstruction results may differ somewhat from the results obtained online, at the Pole.
At level 2 we also reject events with an SPEFit2 reconstruction in the NH, see figure 10.6, which shows the distribution of reconstructed zenith angles after the FSS filter. Note that the neutrino signal distributions in blue show truly down-going tracks; hence the part of the distribution reconstructed as up-going represents so-called mis-reconstructed events.
Level 2 Requirements
• Passed the FSS filter (see chapter 8),
• Reconstructed direction (SPEFit2) is in the SH (θ ≤ 90◦ ).
10.4.3 Level 3
Level 3 focuses on general data quality, removing events with very few hits
(too low charge for reconstructions), too few strings to resolve the azimuthal
direction, and events with bad reconstruction quality (high values of rlogl).
Further we cut on the z-coordinate of the finiteReco reconstruction of the neutrino interaction vertex. This cut significantly lowers the event rate and shows
strong separation between signal- and background-like events as can be seen
in figure 10.7a. The level 3 requirements are summarized in the list below.
The corresponding distributions are also shown in figure 10.7.
Figure 10.6. Zenith angles from the SPEFit2 reconstruction after application of the
FSS filter at level 2.
Level 3 Requirements
• z-coordinate of finiteReco interaction vertex ≤ 250 m,
• Total charge in TWSRTOfflinePulses ≥ 10 p.e.,
• Number of hit strings in TWSRTOfflinePulses ≥ 3,
• rlogl of SPEFit2 ≤ 15.
Note that the cut on rlogl only removes the worst tail of the distribution.
The variable correlates strongly with energy and the cut was chosen to keep a
large part of the low-energy (E < 1 TeV) events, see figure 10.7e.
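A minimal sketch of applying these level 3 requirements, assuming an event object exposing the quantities above; the attribute names are illustrative.

def passes_level3(event):
    return (event.finite_reco_z <= 250.0      # vertex z-coordinate [m]
            and event.total_charge >= 10.0    # TWSRTOfflinePulses charge [p.e.]
            and event.n_hit_strings >= 3      # hit strings in TWSRTOfflinePulses
            and event.spefit2_rlogl <= 15.0)  # SPEFit2 reconstruction quality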
10.4.4 Level 4
Level 4 focuses on the improvement of the angular resolution, which is crucial
for the point source likelihood where we search for a clustering of events. It
also includes variables that exploit the difference between signal- and backgroundlike events. The plots in figure 10.8 show the pointing resolution as a function
of true neutrino energy before (upper row) and after (lower row) level 4, for an
−Eν /10 TeV signal spectrum. The black lines indicate the median in each
E−2
ν e
energy bin, showing a clear improvement especially at the lower energies,
Eν 300 GeV (figure 10.8b and 10.8d). The global median (all events) is 3.1◦
after level 4 compared to 4.3◦ before, i.e., an improvement of roughly 30%.
(a) z-coordinate of reconstructed
interaction vertex
(b) Total charge of
TWSRTOfflinePulses
(c) Number of hit strings in
TWSRTOfflinePulses
(d) rlogl of SPEFit2
(e) rlogl of SPEFit2 as a function
of primary neutrino energy
Figure 10.7. All plots show level 3 variables before level 3 cuts are applied. The
plot on the bottom row shows the dependence of rlogl on the primary neutrino energy
Eν . As expected, higher energy events are easier to reconstruct and hence have lower
rlogl values. The cut at 15 mainly removes low-energy events with a bad angular
reconstruction.
(a) After level 3
(b) Zoom: After level 3
(c) After level 4
(d) Zoom: After level 4
Figure 10.8. True space angle resolution distribution before (upper row) and after (lower row) application of the level 4 cuts. Plots show ψTrue as a function of the primary neutrino energy Eν for an Eν^−2 e^(−Eν/10 TeV) spectrum. The black line goes through the median of ψTrue in each energy bin. The median over all energies is 3.1° (4.3°) after (before) level 4, hence an improvement of roughly 30%.
Several variables related to the pointing accuracy of the events are used at this level, e.g., σCramer−Rao, defined in section 7.3, and the length between the reconstructed interaction vertex⁸ and the COG of the 25% latest pulses of TWSRTOfflinePulses, denoted 'Length fR-COG4:4'. The variables σCramer−Rao and 'Length fR-COG4:4' are shown in figure 10.9. The latter is proportional to the lever arm of hits in the detector and strongly correlates with the true angular uncertainty, see figure 10.9d. Further, we introduce so-called Δ-angles, the space angle between two reconstructions calculated using equation 7.8. Δ1 denotes the space angle between the SPEFit2 reconstruction and a line that goes from the COG of the 25% earliest (in time) pulses to the COG of the 25% latest pulses (using TWSRTOfflinePulses). Δ3 denotes the angle between the SPEFit2 reconstruction and a line starting at the reconstructed interaction vertex and ending at the COG of the 25% latest pulses (using TWSRTOfflinePulses). The Δ-angles are illustrated in figure 10.10.
We also introduce the so-called Punch Through Point (PTP) as the point where the SPEFit2 track reconstruction penetrates an imaginary outer border of IceCube defined using p = 1 (see figure 8.1) and z = ±500 m. The distance between this PTP and the earliest hit DOM in TWSRTOfflinePulses defines one of the variables used. The idea behind this variable is that short distances indicate activity close to the outer borders of the instrumented volume. This may indicate that a muon track entered from the outside, leaking through the initial vetoes. This variable is divided into two regions: one for PTP points where the z-coordinate is equal to 500 m, i.e., where the extension of the infinite track reconstruction passes through the topmost DOM layer of IceCube, and one where it is located below 500 m, i.e., where the infinite track reconstruction enters the detector through the outermost side layer⁹. The final cut is the same for both of these regions, but the distributions are significantly different, which is why they are displayed separately in figure 10.11.
We introduce and make cuts on the two variables denoted 'ZTravel' and 'Accumulation', illustrated in figure 10.12. 'ZTravel' is defined as the mean deviation of the z-coordinates of all hit DOMs from the mean z-coordinate of the first quantile in time, z̄_q1:

ZTravel = (1/n) Σ_i^n (z_i − z̄_q1),    (10.1)

where

z̄_q1 = (1/q) Σ_{i=1}^q z_i,    (10.2)

and q is the total number of hits in the first quantile. 'ZTravel' is constructed using the pulses in TWSRTOfflinePulses. This variable is highly correlated with the zenith angle of the events and can be described as its upward tendency. To avoid cutting away signal from a particular part of the SH we therefore apply a soft cut. The variable 'Accumulation' is defined as the time until 75% of the total charge in TWSRTOfflinePulses has been collected.

⁸ Calculated using the simple finiteReco algorithm as described in section 7.4 and illustrated in figure 7.8.
⁹ Since all tracks are reconstructed to be down-going, no events will enter through the bottom layer, i.e., no tracks will have a PTP located at z = −500 m. Further, due to the definition of the PTP, no events can have z > 500 m or z < −500 m.
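Both variables admit short implementations. The sketch below assumes pulses given as (time, z, charge) tuples from TWSRTOfflinePulses and takes the first quantile in time to be the earliest 25% of the hits, with 'Accumulation' measured from the first pulse.

import numpy as np

def z_travel(pulses):
    """Equation 10.1: mean z-deviation from the mean z of the earliest quarter."""
    pulses = sorted(pulses, key=lambda p: p[0])          # order in time
    z = np.array([p[1] for p in pulses])
    q = max(1, len(z) // 4)                              # first quantile in time
    return np.mean(z - np.mean(z[:q]))

def accumulation(pulses, fraction=0.75):
    """Time until 75% of the total charge has been collected."""
    pulses = sorted(pulses, key=lambda p: p[0])
    t = np.array([p[0] for p in pulses])
    cum_charge = np.cumsum([p[2] for p in pulses])
    idx = np.searchsorted(cum_charge, fraction * cum_charge[-1])
    return t[idx] - t[0]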
In figures 10.9-10.12 all variables used at level 4 are shown together with their correlation with the true space angle resolution. Further, all plots show the distribution of events from a signal simulation weighted to Eν^−2 e^(−Eν/10 TeV) and normalized to experimental data. All plots are constructed using the containment criterion in 'Figure Legends' in section 10.4. The cuts made at level 4 are summarized below. They select events with a small true angular uncertainty, while avoiding variables that are strongly correlated with the true neutrino energy.
Level 4 Requirements
• σCramer−Rao ≤ 10° (defined in section 7.3),
• Length fR-COG4:4 ≥ 100 m,
• Δ1 ≤ 40°,
• Δ3 ≤ 20°,
• Distance between first hit and the PTP ≥ 200 m (for PTP-z < 500 m and PTP-z = 500 m),
• ZTravel ≥ −200 m,
• Accumulation ≤ 2,000 ns.
Figure 10.9. Level 4 variables: (a) reconstruction uncertainty proxy σCramer−Rao; (b) ψTrue as a function of σCramer−Rao; (c) reconstruction uncertainty proxy ‘Length fR-COG4:4’; (d) ψTrue as a function of ‘Length fR-COG4:4’. The upper (lower) row shows the distribution of σCramer−Rao (‘Length fR-COG4:4’). The plots in the left column show the distribution of the events prior to level 4. The right column shows the variables as a function of the true space angle ψTrue, with color indicating the rate of signal events weighted to $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ and normalized to experimental data.
Figure 10.10. Level 4 variables, Δ-angles: (a) reconstruction uncertainty proxy Δ1; (b) ψTrue as a function of Δ1; (c) reconstruction uncertainty proxy Δ3; (d) ψTrue as a function of Δ3. The upper (lower) row shows the distribution of Δ1 (Δ3). The plots in the left column show the distribution of the events prior to level 4. The right column shows the variables as a function of the true space angle ψTrue, with color indicating the rate of signal events weighted to $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ and normalized to experimental data.
Figure 10.11. Level 4 variables, PTP-based: (a) distance between first hit and the PTP for PTP-z < 500 m; (b) ψTrue dependence (PTP-z < 500 m); (c) the same distance for PTP-z = 500 m; (d) ψTrue dependence (PTP-z = 500 m). The distance between the first hit and the PTP for events with PTP-z < 500 m (PTP-z = 500 m) is shown in the upper (lower) plots. The left column shows the distribution of the events prior to level 4. The right column shows the variables as a function of the true space angle ψTrue, with color indicating the rate of signal events weighted to $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ and normalized to experimental data.
Figure 10.12. Level 4 variables: (a) ZTravel; (b) ψTrue as a function of ZTravel; (c) Accumulation; (d) ψTrue as a function of Accumulation. The upper (lower) row shows the distribution of ZTravel (Accumulation). The plots in the left column show the distribution of the events prior to level 4. The right column shows the variables as a function of the true space angle ψTrue, with color indicating the rate of signal events weighted to $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ and normalized to experimental data.
10.4.5 Level 5
In level 5 we focus on additional veto requirements for incoming muons. We also apply a cut on the total charge of direct pulses in TWSRTOfflinePulses using category C (see table 7.1). The events are reconstructed using the advanced likelihood-based finiteReco algorithm described in section 7.4. Cuts are then applied to the resulting interaction vertex, as well as to the likelihood ratio between a starting and an infinite track hypothesis (LLHR Starting Track), see the list of requirements below. Figure 10.13 shows the distribution of some of the variables used at level 5, including the ‘LLH Veto’ and ‘Causality Veto’ discussed below. Level 5 also includes a veto based on possible coincidences between pulses in the in-ice detector and pulses recorded by the surface air-shower array IceTop. This is further discussed in the section ‘IceTop Veto’ below.
LLH Veto
This veto algorithm is based on the calculation of rlogl for early pulses in a cylindrical region surrounding the track reconstruction SPEFit2, on the incoming side, see the illustration in figure 10.14. The calculation is performed on a pulse series of uncleaned hits, considering pulses on the outermost string layer of IceCube only. The idea is that neutrinos interacting inside the detector would only have uncorrelated noise hits in this region, while events impinging from the outside may have left faint traces in the form of individual SLC hits. The radius of the veto cylinder was optimized in steps of 50 m, by considering the number of remaining signal events S in relation to the number of background events B through the figure of merit $S/\sqrt{B}$. The optimal value of 350 m was chosen for the final cut.
The rlogl values from the LLH Veto are presented in figure 10.13e. The bin corresponding to a value above 19 is added manually and represents events without pulses in the veto region (i.e., where the calculation of rlogl failed). Events with an rlogl value less than 18 are rejected.
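The radius optimization can be summarized schematically as below (a sketch; the surviving event counts per trial radius are assumed to come from the simulated signal sample and the experimental burn sample, and all names are hypothetical):

```python
import numpy as np

# Trial cylinder radii in meters, scanned in steps of 50 m.
radii = np.arange(100, 651, 50)

def best_radius(n_signal, n_background):
    """Pick the radius maximizing the figure of merit S / sqrt(B).
    n_signal and n_background map each radius to the number of events
    surviving the corresponding veto cut."""
    fom = {r: n_signal[r] / np.sqrt(n_background[r]) for r in radii}
    return max(fom, key=fom.get)
```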
Causality Veto
The Causality Veto algorithm investigates whether potential noise hits in two distinct veto regions (one top veto and one side veto) are causally connected to hits in the ‘active’ region of the detector. This veto does not explicitly rely on a track reconstruction but investigates the causal (time and space) connection between hits in the veto region and the first HLC hit in a reference volume. If a causal connection exists, the hits are expected to line up approximately along the so-called light-cone describing a particle traveling at the speed of light.
PDFs are established for two hypotheses. The background hypothesis (B) is described by experimental data using the ‘Burnsample-BIG’, and the signal hypothesis (S) is described by a simulated signal sample weighted to $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$. The likelihood ratio between these two hypotheses is calculated for all events and is used to distinguish between signal- and background-like patterns in the detector.
Figure 10.13. Level 5 variables: (a) xy-plane of the reconstructed interaction vertex, showing experimental data; (b) xy-plane of the reconstructed interaction vertex, showing signal $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$; (c) z-coordinate of the reconstructed interaction vertex; (d) LLHR Starting Track; (e) rlogl from the LLH Veto; (f) total charge of direct pulses of category C.
Figure 10.14. LLH Veto: The rlogl value is calculated for pulses located in a cylinder around the track reconstruction, on the incoming side. The idea is that lower rlogl values indicate some coincidence with hits in the veto region, which may be interpreted as a particle entering from outside, leaking through previous vetoes.
Figure 10.15. Causality Veto: PDFs are calculated based on the causality of hits in a veto region (dark gray box) relative to the first HLC hit in a reference, so-called ‘active’, volume. Two different veto topologies are used. The top veto (left) uses the twelve top DOM layers of IceCube, while the side veto (right) consists of the two outermost string layers of IceCube, excluding all DOMs used in the top veto.
The top veto consists of the twelve (12) top DOM layers of IceCube (corresponding roughly to the finiteReco cut at level 3). The side veto consists of the two outermost string layers of IceCube, excluding all DOMs used in the top veto, i.e., these veto regions are completely separated in space. Further, we define a reference, so-called ‘active’, volume as the complement of the DOMs considered in each veto. Causality is studied for hits in the veto regions in relation to the first HLC hit in the corresponding ‘active’ volume, denoted the reference hit. Figure 10.15 shows an illustration of the veto construction.
The top veto is only applied to events that enter through the top of IceCube (PTP-z > 300 m). Similarly, the side veto is only applied to events that enter through the side (PTP-z ≤ 300 m). The background and signal PDFs are constructed following the same division. Note that there is an implicit dependence on the track reconstruction, since the PTP is calculated using SPEFit2. The PDFs for signal events are constructed from events that are truly down-going (SPEFit2) and that have a reconstructed vertex inside the region defined by p = 0.8 and z ≤ 300 m.
The distributions of veto hits relative to the reference hit, expressed in the time difference δt and spatial distance δr, are shown in figure 10.16. These quantities are defined for each pulse i (note the sign of δt, where a negative value identifies early veto hits):

$$\delta r = \sqrt{(x_i - x_\mathrm{ref})^2 + (y_i - y_\mathrm{ref})^2 + (z_i - z_\mathrm{ref})^2}, \qquad (10.3a)$$
$$\delta t = t_i - t_\mathrm{ref}. \qquad (10.3b)$$
Each plot in the figure also shows the light-cone as two solid black lines converging at (δt, δr) = (0, 0), the location of the reference hit. Since incoming background-like events cause early hits that are consistent with the speed of light with respect to the reference point, these are expected to cluster close to the light-cone line for δt < 0. On the opposite side, δt > 0, we see traces of particles leaving the detector through the side veto, as well as faint back-scattered light produced close to the neutrino interaction vertex, for both vetoes. The edge at −4 μs is caused by the different pre-trigger window definitions used for the two main triggers, see section 5.4.
A so-called box cut is used to select the hits to use in the likelihood calculation, see the black dashed borders in figure 10.16. Note that the boxes applied are different for the side and top veto¹⁰. The shape is determined manually to extract the regions where possible veto hits are located, while at the same time avoiding vetoing events by coincidence; this is particularly important for tracks leaving the detector through one of the side veto layers. A band is clearly visible in the region roughly between 200 m and 1,000 m. This corresponds to the range of DOM-to-DOM distances between DOMs in the veto region and DOMs used as reference points, for each veto respectively.

¹⁰ The top veto box is defined as: δt ≤ 0 ns, 120 m < δr < 850 m, and −150 m < δr + cδt < 450 m. The side veto box is defined as: δt ≤ 0 ns, 200 m < δr < 600 m, and −150 m < δr + cδt < 450 m.
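A minimal sketch of equations 10.3a-10.3b and of the top veto box cut (footnote 10) might look as follows (Python; the value of c in m/ns and the array layout are assumptions for illustration):

```python
import numpy as np

C_LIGHT = 0.2998  # speed of light in m/ns, for delta-r + c * delta-t

def causality_coords(pulse_xyz_t, ref_xyz_t):
    """delta-r and delta-t of each veto pulse relative to the reference
    (first HLC) hit, per equations (10.3a) and (10.3b)."""
    xyz = pulse_xyz_t[:, :3] - ref_xyz_t[:3]
    dr = np.linalg.norm(xyz, axis=1)       # eq. (10.3a)
    dt = pulse_xyz_t[:, 3] - ref_xyz_t[3]  # eq. (10.3b); negative = early
    return dr, dt

def in_top_veto_box(dr, dt, c=C_LIGHT):
    """Box cut for the top veto as defined in footnote 10."""
    offset = dr + c * dt
    return (dt <= 0) & (dr > 120) & (dr < 850) \
        & (offset > -150) & (offset < 450)
```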
PDFs
The construction of the PDFs is challenging, especially due to the limited statistics in the signal sample. Instead of constructing multi-dimensional PDFs from the information in the boxes, we construct several one-dimensional PDFs. We define the so-called decorrelated distance δr′ and decorrelated time δt′ by a rotation in the two-dimensional space defined by δr and δt:

$$\begin{pmatrix} \delta r' \\ \delta t' \end{pmatrix} = \begin{pmatrix} \cos\omega & -\sin\omega \\ \sin\omega & \cos\omega \end{pmatrix} \begin{pmatrix} \delta r \\ \delta t \end{pmatrix}, \qquad (10.4)$$

where ω = tan⁻¹(1/c) is the angle between the vertical axis and the line representing an incoming particle traveling at c. We also define the so-called ‘Distance offset’ (d) as d = δr + cδt, with the corresponding PDFs $p_d$. Further, we count the number of pulses within the boxes and define the corresponding PDFs, denoted $p_N$.
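The decorrelation rotation of equation 10.4 and the distance offset can be sketched as below (assuming, as above, that c is expressed in m/ns so that δr + cδt is in meters):

```python
import numpy as np

C_LIGHT = 0.2998                    # m/ns
OMEGA = np.arctan2(1.0, C_LIGHT)    # omega = arctan(1/c)

def decorrelate(dr, dt, omega=OMEGA):
    """Rotate (delta-r, delta-t) by omega, per eq. (10.4), yielding the
    decorrelated distance and time used for the one-dimensional PDFs."""
    rot = np.array([[np.cos(omega), -np.sin(omega)],
                    [np.sin(omega),  np.cos(omega)]])
    dr_p, dt_p = rot @ np.vstack([dr, dt])
    return dr_p, dt_p

def distance_offset(dr, dt, c=C_LIGHT):
    """The 'Distance offset' d = delta-r + c * delta-t."""
    return dr + c * dt
```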
PDFs are established for both signal and background-like events. They are constructed by fitting splines to the variable distributions presented in the two upper rows of figures 10.17 (top veto) and 10.18 (side veto). The total background simulation is shown using a red solid line, while the red shaded area indicates the spread due to its statistical uncertainty. Atmospheric muons μ are shown as red dashed lines and atmospheric muon-neutrinos νμ are shown as red dotted lines. Since the distributions for different signal spectra (blue lines) do not differ much from each other, we use one model, $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$, to represent generic signal events.
Likelihood Ratio
The likelihood of an event to be described by signal or background is calculated for each of the four PDFs used, for both the top and side veto. The final likelihood ratio, denoted ‘Causality Veto LLHR’ (R), is defined in equation 10.5 as the product of the individual likelihood ratios between the signal and background hypotheses. The resulting ratios for the top and side veto are presented in the bottom rows of figures 10.17 and 10.18, respectively. The value 13 is reserved for events without hits in the veto region, while −13 is reserved for events without a reference hit in the fiducial volume.

$$R = \log\left[\frac{p_N(N|S)}{p_N(N|B)} \cdot \prod_i^{N}\frac{p_r(\delta r_i|S)}{p_r(\delta r_i|B)} \cdot \prod_i^{N}\frac{p_t(\delta t_i|S)}{p_t(\delta t_i|B)} \cdot \prod_i^{N}\frac{p_d(d_i|S)}{p_d(d_i|B)}\right], \qquad (10.5)$$

where the first ratio is defined once for each event given the total number of hits N, and $p_r$ and $p_t$ are the PDFs for δr and δt, respectively.
Figure 10.16. ‘Causality Veto’ distributions for the top veto (left column) and side veto (right column): (a) top veto, experimental data; (b) side veto, experimental data; (c) top veto, simulated atmospheric μ; (d) side veto, simulated atmospheric μ; (e) top veto, signal $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$; (f) side veto, signal $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$. The box cuts used in each veto are indicated as the dashed contours.
Figure 10.17. The distributions used to construct the PDFs used in the ‘Causality Top
Veto’ are shown in the two upper rows. The bottom row shows the distribution of the
corresponding likelihood ratio.
Figure 10.18. The distributions used to construct the PDFs used in the ‘Causality Side
Veto’ are shown in the two upper rows. The bottom row shows the distribution of the
corresponding likelihood ratio.
Figure 10.19. Illustration of the IceTop Veto.
Large positive values of R correspond to signal-like events, while large negative values correspond to background-like events. Cuts are made at 0 for both the top and side vetoes, excluding events with a value ≤ 0.
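Schematically, equation 10.5 can be evaluated as below (a sketch; the fitted spline PDFs are assumed to be available as callables, and the sentinel values follow the text):

```python
import numpy as np

def causality_llhr(n_hits, dr_p, dt_p, d, pdf_s, pdf_b):
    """Sketch of eq. (10.5): log of the product of per-PDF likelihood
    ratios. pdf_s/pdf_b are dicts of callables (e.g. fitted splines)
    keyed by 'N', 'r', 't', 'd'. Returns +13 if there are no veto hits;
    the -13 case (no reference hit) is assumed handled upstream."""
    if n_hits == 0:
        return 13.0
    r = np.log(pdf_s['N'](n_hits) / pdf_b['N'](n_hits))
    for key, vals in (('r', dr_p), ('t', dt_p), ('d', d)):
        r += np.sum(np.log(pdf_s[key](vals) / pdf_b[key](vals)))
    return r
```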
IceTop Veto
A so-called ‘IceTop Veto’ is designed to study coincidences between hits in the in-ice detector and hits in the IceTop air-shower array, see figure 5.2. For each pulse in IceTop, the algorithm calculates the time and lateral distance relative to the shower axis of a moving shower plane, defined (curvature not included) by the direction and timing of the event track reconstruction (SPEFit2) performed on hits in the in-ice detector, see the illustration in figure 10.19.
A true correlation will occur on-time, i.e., when the shower plane passes IceTop, and is only expected for vertically down-going events that pass through or close by the IceTop array. We use the lateral distance between each IceTop pulse and the reconstructed shower axis (track reconstruction) as an indicator of hit probability.
The integrated number of IceTop pulses (SLC and HLC) is shown as a function of the time to the shower plane, for various cuts on the lateral distance, in the plots of figure 10.20. A sharp peak of on-time pulses is clearly seen for all lines and occurs on top of a general noise hit distribution, with tails formed due to the mismatch in overlapping trigger windows as discussed for the ‘Causality Veto’. The peaks consist of pulses very close to the shower axis. In fact, we can limit the region of interest to lateral distances below ∼400 m without losing efficiency for discarding coincident events. Further, by only considering pulses below a certain lateral distance, we can lower the random coincidence rate. To illustrate this, we show the number of IceTop pulses as a function of the time and lateral distances in the upper plot of figure 10.21. An on-time region is defined as the range from −400 ns to 100 ns, and an off-time region is defined prior to this window in time, between −2,500 ns and −2,000 ns. The lower plot of figure 10.21 shows the projection of the time distribution on the horizontal axis for the pulses in the on-time (black) and off-time (gray) regions, respectively.
The final analysis considers pulses within 400 m of the shower axis. An event is discarded if it has one or more pulses in the on-time region. This criterion removes 0.77% of the events in the experimental data sample. The random coincidence rate was determined by imposing the same criterion on the off-time region, leading to an expected loss of about 0.13% for signal events weighted to $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$. This small loss of signal neutrinos is addressed in section 10.8, discussing the systematic uncertainties of the analysis.
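The veto decision itself reduces to a simple window-and-distance test, sketched here (times in ns relative to the passing shower plane; the names are illustrative):

```python
import numpy as np

def icetop_veto(pulse_times, pulse_lateral_dists,
                t_window=(-400.0, 100.0), max_lateral=400.0):
    """Reject the event if any IceTop pulse is on-time (within the
    window relative to the shower plane) and within 400 m lateral
    distance of the reconstructed shower axis."""
    t = np.asarray(pulse_times)
    d = np.asarray(pulse_lateral_dists)
    on_time = (t >= t_window[0]) & (t <= t_window[1]) & (d <= max_lateral)
    return not np.any(on_time)   # True = event survives the veto
```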
Level 5 Requirements
• (x,y)-coordinate of the advanced finiteReco interaction vertex is inside the polygon defined by p = 0.9 (shown with a black line in figures 10.13a and 10.13b),
• z-coordinate of the advanced finiteReco interaction vertex ≤ 150 m,
• LLHR Starting Track ≥ 5 (see section 7.4),
• rlogl from LLH Veto ≥ 18 (see ‘LLH Veto’ above),
• Total charge of direct pulses (using TWSRTOfflinePulses and category C, see table 7.1) ≥ 3 p.e.,
• LLHR Causality Veto TOP/SIDE > 0 (paired with the z-coordinate of the PTP, see ‘Causality Veto’ above),
• No coincident hits in the IceTop air-shower array (see ‘IceTop Veto’).
Figure 10.20. Time to the shower plane in the IceTop Veto. The upper and lower plots show the number of IceTop pulses as a function of the time to the shower plane for a number of different lateral distances. The sharp peak of on-time pulses is clearly seen for all lines and occurs on top of a general noise hit distribution. The peak consists of pulses very close to the shower axis; in fact, we can limit the region of interest to lateral distances below ∼400 m without losing efficiency for discarding coincident events. In this way we can also lower the random coincidence rate. The gray shaded areas show the definitions of the on-time (off-time) regions. Note that the time range of the lower plot corresponds to the definition of the on-time region. Events with at least one IceTop pulse on-time within 400 m of the shower axis are rejected.
Figure 10.21. The upper plot shows the number of IceTop pulses as a function of the time and lateral distances. A band consisting of on-time pulses is clearly seen on the left side for distances up to about 700 m. The lower plot shows the projection of the time distribution on the horizontal axis for the pulses in the on-time (black) and off-time (gray) regions, respectively.
10.4.6 Level 6
In level 6 we use a so-called machine learning algorithm to exploit the separation between signal- and background-like events for variables that might be nested. The algorithm used is a combination of a so-called Boosted Decision Tree (BDT) [152] and a so-called random forest. In general, we will refer to it as a BDT throughout this thesis.
The BDT algorithm simultaneously analyzes all input variables in a multi-dimensional space, producing a final score for each event indicating whether it is signal- or background-like. The BDT is developed using a training sample consisting of both signal and background events. In this thesis, we use half of the experimental data in ‘Burnsample-BIG’ as background and half of all simulated muon-neutrino events weighted to $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ as signal.
The procedure used consists of two consecutive BDTs. The initial one uses all 22 variables considered, while the final one only uses the 14 most important according to the results of the initial training. This is done to reduce the dimensionality of the phase space of the BDT as a means to lower the risk of so-called overtraining, see the discussion below.
Decision trees are classifiers that use a binary tree structure, splitting the data sample in several consecutive steps by applying linear cuts on the participating variables at each split point, a so-called node. Each cut is chosen to increase the signal-to-background separation of the sample. The splitting continues until a stop criterion is fulfilled. Each resulting so-called end node is classified (scored) as either signal- or background-like based on whether it is dominated by events from the simulated signal or background sample. The stop criterion consists of several conditions: either the specified maximum so-called depth (number of splits) is reached, a node is completely pure (i.e., only signal or background events remain), or the best cut results in a node with less than the specified minimum number of events remaining.
A BDT consists of many consecutive decision trees. Typical for classic BDTs is that a boosting weight is applied to all events when moving from tree to tree. These weights are determined by whether the previous tree classified the events correctly or not. Characteristic for random forests is instead the use of randomization. In the implementation used in this thesis, we allow each node to pick 5 (10) random variables out of the 22 (14) provided in the initial (final) BDT. Further, each tree is only allowed to use 40% of the data in the training samples, also selected randomly.
The final BDT score for each event is calculated as the weighted average score from the individual trees (including boost weights). The score ranges from +1 to −1, indicating whether the event is signal- or background-like, respectively.
BDTs can easily be overtrained, either by being tuned to classify every single event in the training sample correctly, i.e., so that the division between signal and background is perfect, or by being tuned to find differences between the imperfect simulation and experimental data instead of differences in the physics parameters, so-called MC overtraining. To verify the training we therefore use the other half of each data sample, not used in the training, as a dedicated testing sample. A good agreement between the BDT scores of the training and the testing samples is expected if overtraining is low. We quantify this by performing a so-called Kolmogorov-Smirnov (KS) test, providing a p-value based on the statistical probability that the two compared samples are drawn from the same distribution. In general, a smaller KS p-value indicates greater overtraining. MC overtraining will not show up in a KS test but can be studied visually. If MC overtraining is present, it is typically characterized by a small excess of events in the experimental data sample relative to the number of simulated events, in the region where the signal and background samples meet. Introducing random components, as described above, is another tool to minimize the overtraining of the sample. Further, we apply so-called pruning, where the deeper nodes of each tree are removed¹¹.

¹¹ While the nodes close to the first node do most of the classification, deeper nodes are important for classifying less frequent types of events. These deeper nodes are also more sensitive to statistical fluctuations in the training sample.
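The train/test comparison can be sketched with SciPy's two-sample KS test (illustrative; the score arrays are assumed to come from the trained BDT applied to each half):

```python
from scipy.stats import ks_2samp

def ks_overtraining_pvalue(scores_train, scores_test):
    """Compare BDT scores of the training and testing halves; a small
    KS p-value indicates the samples differ, i.e. possible overtraining."""
    statistic, p_value = ks_2samp(scores_train, scores_test)
    return p_value
```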
Initial BDT
The initial BDT uses 22 candidate variables that all show good separation power on visual inspection. These are shown in appendix A and presented in the list below. We made a large scan of hundreds of different combinations of BDT settings for the number of trees, node depth, prune strength, etc. The BDT that provided the highest p-value from the KS test (∼10⁻³) was selected. Settings used: 450 trees, each with a node depth of 4. The boost parameter was set to 0.3 and the pruning strength to 20.
List of initial variables:
Initially we considered 22 variables for the BDT. These are presented in the list below and shown in appendix A. Variables based on the reconstructed interaction vertex are calculated using the advanced algorithm described in section 7.4. The COG is calculated using TWSRTOfflinePulses. The PTP is calculated using SPEFit2.
• Reconstructed interaction vertex:
– z-coordinate,
– distance in the xy-plane, $r = \sqrt{x^2 + y^2}$,
– LLHR Starting Track (defined in section 10.4.5),
– rlogl of the finiteReco fit (defined in section 7.4).
• LineFit reconstruction (see section 7.2.1):
– Magnitude of vLineFit,
– x-component of vLineFit,
– y-component of vLineFit,
– z-component of vLineFit.
• SPEFit2 reconstruction (see section 7.2.2):
– zenith (θ),
– rlogl,
– z-coordinate of the PTP,
– PTP distance in the xy-plane, $r = \sqrt{x^2 + y^2}$.
• Pulse-map variables (based on TWSRTOfflinePulses):
– Total charge,
– Accumulation (see section 10.4.4),
– ZTravel (see section 10.4.4),
– Track hit smoothness (see below),
– Total charge of late pulses ($t_\mathrm{res}$ > 15 ns),
– Number of direct pulses (category D, see table 7.1).
• Miscellaneous:
– Distance between the first hit and the PTP (see section 10.4.4),
– Length fR-PTP, i.e., the distance between the interaction vertex and the PTP,
– z-coordinate of the COG,
– σz of the COG, i.e., the RMS¹² of the z-coordinate of the COG.
The track hit smoothness is a measure of how smoothly the pulses (TWSRTOfflinePulses) are distributed along the track reconstruction (SPEFit2). The algorithm only considers the first pulse of every DOM within a cylinder surrounding the track with a radius of 150 m. The distance from the first hit to each of the following hits is projected onto the track, and the projections are ordered in time. The resulting vector is compared with a uniform hit distribution. Tracks with all hits clustered in the beginning or end of the track receive a value of +1 or −1, respectively, while a perfectly uniform distribution corresponds to 0.
Results from the initial BDT:
The 14 most important variables of the initial BDT are shown in table 10.3, ranked starting from 1 (most important). Importance is a measure of how often a variable is used to discriminate between signal and background events.
Final BDT
The 14 most important variables of the initial BDT are used in the final BDT, see table 10.3. Again we made a scan of hundreds of different combinations of BDT settings. The BDT output with the highest p-value from the KS test (∼10⁻¹) was selected. Settings used: 350 trees, each with a node depth of 6. The boost parameter was set to 0.3 and the pruning strength to 20. The result is shown in figures 10.22 and 10.23a, where both training and testing samples have been merged. The plots follow the figure legend introduced in section 10.4.

¹² Root Mean Square (RMS)
Table 10.3. Variable importance of the final BDT

Rank  Variable Name
1     Charge of late pulses
2     Total charge
3     Hit distribution smoothness
4     σz of COG
5     z-coordinate of finiteReco
6     z-coordinate of COG
7     Distance between earliest hit DOM and PTP
8     Magnitude of vLineFit
9     Zenith (θ)
10    finiteReco xy-distance
11    finiteReco rlogl
12    rlogl
13    z-coordinate of PTP
14    Number of direct pulses (D)
Figure 10.22 also shows a comparison between experimental data and the total simulated background (atmospheric muons and atmospheric neutrinos) in the bottom panel. Figures 10.23b, 10.23c, and 10.23d show the BDT score divided into three bands using the zenith angle. The atmospheric muons are generally overestimated in the region 0.66 < cos θ ≤ 1.00 and underestimated in 0.00 < cos θ ≤ 0.33, but the overall shape is consistent with the experimental data up to statistical fluctuations.
The transition from atmospheric muons to atmospheric neutrinos is clearly visible in the plots in figure 10.23 and occurs at BDT scores slightly above 0.5. The transition is most clear in the zenith band 0.00 < cos θ ≤ 0.33, i.e., for horizontal and close to horizontal events with θ ∼ 70°-90°.
The KS p-value is 0.73 for the signal sample and 0.99 for the background sample. These values translate into a very high similarity between the testing and training samples, respectively. No visual discrepancies are observed other than statistical ones. Further, no hints of overtraining were found in the region where the signal and background distributions meet, at a BDT score of ∼0.0.
The correlations between the input variables are shown in figure 10.24 for experimental data (left) and signal (right), respectively. There is a clear correlation between a couple of variables, e.g., ‘Total charge’ and ‘Charge of late pulses’, ‘finiteReco rlogl’ and ‘rlogl’ (SPEFit2), and ‘zenith (θ)’ and ‘z-coord. of PTP’. We therefore tested a new BDT, removing one variable in each pair of strongly correlated variables. The results got slightly worse, i.e., lower KS p-values, indicating a higher degree of overtraining.
Figure 10.22. BDT scores from the final BDT with events from the whole SH. The plots follow the figure legend introduced in section 10.4. The bottom panel shows a comparison between experimental data and the total simulated background (atmospheric muons and atmospheric neutrinos).
Therefore, we decided to use the BDT score from the 14-variable BDT as the final one in the analysis.
The final BDT cut was optimized in relation to the final sensitivity of the analysis. The sensitivity for signal, described by an $E_\nu^{-2}$ spectrum with a hard cut-off at 10 TeV, was calculated for a range of different BDT cuts. The final cut is placed at 0.40. See section 10.6.1 for more details. Further, studies were made of the dependence of the BDT score on the zenith angle of the events. No improvement was found by making a zenith-angle-dependent cut instead of using one cut value for all zenith angles, which is why the latter was chosen.
Level 6 Requirements
• BDT score ≥ 0.4.
Figure 10.23. BDT scores from the final BDT: (a) whole SH, 0.00 < cos θ ≤ 1.00 (same as figure 10.22 but in logarithmic scale); (b) 0.00 < cos θ ≤ 0.33; (c) 0.33 < cos θ ≤ 0.66; (d) 0.66 < cos θ ≤ 1.00. All plots follow the figure legend introduced in section 10.4. The upper right panel and the panels in the bottom row show the BDT score for events divided into three bands using the zenith angle. The transition from atmospheric muons (red dashed line) to atmospheric neutrinos (red dotted line) can be seen for BDT scores slightly above 0.5. This transition is most clear in the zenith band 0.00 < cos θ ≤ 0.33 (upper right), i.e., for horizontal and close to horizontal events.
Figure 10.24. Correlation matrices for the final BDT for experimental data (left) and signal (right). The variables shown are: z-coord. of COG, Dist. b/w 1st hit and PTP, zenith (θ), rlogl, Hit dist. smoothness, z-coord. of PTP, No. of direct pulses (D), Charge of late pulses, z-coord. of finiteReco, finiteReco xy-dist., finiteReco rlogl, σz of COG, Total charge, and Length of vLineFit.
10.4.7 Level 7
At level 7 we perform several advanced reconstructions considering direction, angular uncertainty, and energy, see chapter 7 for details. All of the reconstructions are required to succeed, and the resulting directions are used to reject events from the NH. A cut on the LineFit speed is included to reject cascade-like events (see section 4.4). Further, a cut is applied to the corrected paraboloid uncertainty, σRescaled, defined in the section ‘Angular Uncertainty - Pull Correction’ below. The cut was optimized in relation to the sensitivity at the final level of the analysis, see details in section 10.6.1. The cut is placed at 5°.

Level 7 Requirements
• Reconstructed MPEFitSpline points to the SH (θ ≤ 90°),
• Magnitude of vLineFit ≥ 0.175 m/ns, see figure 10.25,
• Paraboloid σRescaled ≤ 5°, see figure 10.26b.
Angular Uncertainty - Pull Correction
The angular uncertainty is estimated using the paraboloid algorithm discussed in section 7.3. The performance is evaluated using simulated signal events weighted to an $E_\nu^{-2}$ spectrum. Figure 10.26a shows the paraboloid σReco, calculated using equation 7.10, as a function of the true space angle between the track reconstruction and the simulated neutrino direction. The black diagonal shows the 1:1 correspondence. Ideally, the points would cluster around the line, but for unknown reasons the algorithm more often underestimates the angular uncertainty, a bias that becomes worse at higher energies.
Figure 10.25. The left plot shows the magnitude of vLineFit at level 2, while the right plot shows the magnitude of vLineFit at the final level of the analysis. Both plots follow the figure legend introduced in section 10.4, with additional green lines representing electron-neutrinos. The green lines follow the same weighting scheme as for muon-neutrinos and are normalized to the experimental data. A cut at 0.175 m/ns effectively removes the majority of the cascade-like events.
Figure 10.26. Panels: (a) paraboloid σ; (b) corrected paraboloid σRescaled; (c, e) pull distribution (linear and log scale); (d, f) corrected pull distribution (linear and log scale). The upper row shows the estimate of the angular uncertainty σReco as a function of the true space angle before (left) and after (right) the pull correction. The second row (linear scale) and third row (log scale) show the pull, defined according to equation 7.11, as a function of reconstructed energy. The correction function is constructed by fitting a 4th-degree polynomial (black dashed line) to the median pull in each bin and returning it to the ideal median equal to one (red dashed line). All plots in this figure show simulated signal events weighted to $E_\nu^{-2}$, normalized to the experimental data rate at the same analysis level.
Figure 10.27. Event distributions for several signal spectra at the final level of the event selection. Neutrino signal hypotheses are shown as blue lines: $E_\nu^{-2}$ (light blue, solid), $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ (dashed), $E_\nu^{-2}e^{-E_\nu/1\,\mathrm{TeV}}$ (solid), and $E_\nu^{-3}$ (dotted). All signal events are truly down-going CC events that truly interact within the detector (using criteria p = 0.8 and z ≤ 300 m). Atmospheric muon-neutrinos νμ are shown as a red dotted line and correspond to the rate predicted by the Honda model, given a livetime of 329 days. The red solid line shows atmospheric muon-neutrinos νμ contained using the same definition as for signal. All signal lines are normalized to the red solid line.
To quantify the discrepancy we define the pull distribution according to equation 7.11. Figures 10.26c (linear scale) and 10.26e (logarithmic scale) show the pull as a function of the reconstructed energy (denoted ‘Energy Proxy’). The desired ideal pull value of 1 is shown as a red dashed line. The correction function consists of a 4th-degree polynomial fit (black dashed line) to the median pull in each energy bin (black dots). Also shown is the standard deviation in each bin (gray shadow). The correction function is derived for a signal spectrum of $E_\nu^{-2}$ but is applied generally to all signal hypotheses.
A biased estimator of the angular uncertainty can lead to a clear degradation of the performance of the final point source likelihood results, which is why the correction is applied to all events in the final sample, returning the median pull to one throughout the energy spectrum. The result is shown in figures 10.26d (linear scale) and 10.26f (logarithmic scale). The pull-corrected paraboloid value, σRescaled, is used as an estimate of the angular uncertainty in the following sections.
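The correction procedure can be sketched as follows (illustrative; the binning and the fit call are assumptions, the polynomial degree follows the text, and every bin is assumed to be populated):

```python
import numpy as np

def fit_pull_correction(log10_e, pull, n_bins=20, degree=4):
    """Fit a 4th-degree polynomial to the median pull (psi_true / sigma)
    in bins of the energy proxy, following the procedure in the text."""
    edges = np.linspace(log10_e.min(), log10_e.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    medians = np.array([np.median(pull[(log10_e >= lo) & (log10_e < hi)])
                        for lo, hi in zip(edges[:-1], edges[1:])])
    return np.polynomial.Polynomial.fit(centers, medians, degree)

def rescale_sigma(sigma, log10_e, correction):
    """Scale the paraboloid sigma so the median pull returns to one."""
    return sigma * correction(log10_e)
```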
10.5 Event Selection Summary
Table 10.4 shows a summary of the event rates at the different cut levels of the analysis. The values represent the rates for experimental data and the simulated atmospheric background (μ and νμ), and the signal retention for various signal hypotheses. All values are calculated after the cuts of the corresponding level have been applied and are subject to statistical variations as well as uncertainties in the models assumed. The final level of experimental data consists of 6191 events for a livetime of 329 days. All signal events are truly down-going CC events that interact within the detector (using criteria p = 0.8 and z ≤ 300 m). At the final level, level 7, we expect about a factor of two more events from atmospheric muons than from atmospheric neutrinos. The signal efficiency is presented relative to level 2. For an $E_\nu^{-2}$ spectrum it is about 7.5%, and slightly lower for the softer spectra.
The neutrino simulations used for the analysis in this thesis do not include the accompanying muons from the CR shower. Such events are likely to be vetoed by the analysis if the muons reach the detector with sufficient energy. The effect of this so-called self-vetoing is expected to be more pronounced at neutrino energies ≳ 10 TeV [153, 154], i.e., above the energies relevant for this analysis. Note that the rate estimates of atmospheric muons and atmospheric neutrinos do not exclude a contribution from signal events.
10.5.1 Signal
Figure 10.27 shows the event distributions for several signal spectra at the final level of the event selection. Atmospheric muon-neutrinos νμ are shown as a red dotted line and correspond to the rate predicted by the Honda model. Neutrino signal hypotheses are shown as blue lines. All signal events are truly down-going CC events that truly interact within the detector (using criteria p = 0.8 and z ≤ 300 m). The red solid line shows atmospheric muon-neutrinos νμ contained using the same definition.
The neutrino effective area for the final level sample is shown in the upper panel of figure 10.28. It is calculated using equation 6.3 in section 6.2. The effective area shown is the average of the areas for νμ and ν̄μ. The solid line shows the distribution averaged over the whole SH, while the dashed, dotted, and dash-dotted lines show the distribution in declination bands of the true neutrino direction. The effective area is shown for the energies relevant to this analysis.
The lower panel of figure 10.28 shows the effective target mass, based on the effective volume of the detector, defined in section 6.2, with a target density of 0.92 g cm⁻³ [2]. The target mass represents the efficiency to detect a muon created inside the detector. This is in contrast to the definition of effective area, which also includes the neutrino interaction probability in order to obtain the event rate directly from a flux model. The target mass is roughly 0.1-20 MTon for the LESE analysis, depending on energy.
Table 10.4. Summary of the event rates after different levels of the event selection. The values shown represent the rates for experimental data and the simulated atmospheric background (μ and νμ), and the signal retention for four different spectra ranging from hard to soft. All signal events are truly down-going, truly interacting (through CC) inside p = 0.8 and z ≤ 300 m (see figure 8.1). The livetime of the experimental data is 329 days.

Level     Exp. data   Atmo. μ   Atmo. νμ   $E_\nu^{-2}$   $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$   $E_\nu^{-2}e^{-E_\nu/1\,\mathrm{TeV}}$   $E_\nu^{-3}$
Level 1   ∼2,500 Hz   -         -          -       -        -        -
Level 2   171 Hz      158 Hz    5.3 mHz    100%    100%     100%     100%
Level 3   32 Hz       29 Hz     3.4 mHz    80.0%   83.1%    82.5%    76.8%
Level 4   5.2 Hz      5.1 Hz    1.1 mHz    45.5%   47.2%    42.9%    31.2%
Level 5   270 mHz     279 mHz   0.46 mHz   18.2%   21.0%    19.8%    14.9%
Level 6   0.43 mHz    0.30 mHz  0.13 mHz   11.3%   10.0%    6.0%     4.0%
Level 7   0.22 mHz    0.16 mHz  0.08 mHz   7.5%    6.3%     3.8%     2.0%
Figure 10.28. Effective area (upper) and effective target mass (lower) at final level, shown for the whole SH (−90° ≤ δ < 0°) and in the declination bands −90° ≤ δ < −60°, −60° ≤ δ < −30°, and −30° ≤ δ < 0°.
As a comparison, the HESE analysis has an effective target mass of roughly 10 MTon at 10 TeV, and 500 MTon for energies above 100 TeV [3].
10.6 Likelihood Analysis
We search for point sources using the un-binned maximum likelihood method presented in chapter 9, applied to the first year of data from the 86-string configuration of IceCube. We fit two parameters: the number of signal events nS and the spectral index γ of an unbroken power-law. The likelihood requires three observables for each event: a reconstructed direction (MPEFitSpline), an estimate of the angular uncertainty (σRescaled), and an estimate of the neutrino energy (the sum of Millipede depositions contained in the instrumented volume).
Two different searches are performed. One looks for evidence of a neutrino source anywhere in the SH and is not motivated by any prior information about where such a source might be located. The other searches for an excess of signal-like emission at the coordinates of a pre-defined list of known sources of electromagnetic radiation.
10.6.1 Sensitivity
The performance of the analysis is estimated by observing the response of so-called pseudo-experiments: sets of randomized experimental data in which signal events from an astrophysical spectrum, at a pre-defined location and with a configurable strength, can be included. The test statistic TS defined in equation 9.6 is maximized for each pseudo-experiment. The sensitivity of the analysis is defined as the flux level at which 90% of pseudo-experiments with injected signal give a p-value less than 0.5. This corresponds to the median upper limit in the absence of signal, i.e., from background fluctuations only. Further, the discovery potential is defined by injecting signal events up to the flux level at which 50% of all pseudo-experiments give a p-value corresponding to a 5σ discovery. For the declination shown in figure 9.4 this means we demand that 50% of the pseudo-experiments are above TS ∼ 19, where only ∼0.00003% (2.85 · 10⁻⁵ %) [2] of the background-only experiments are expected.
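Schematically, the sensitivity can be extracted from pseudo-experiment TS values as below (a sketch; ts_for_flux is assumed to map injected flux levels to arrays of TS values from the corresponding pseudo-experiments):

```python
import numpy as np

def sensitivity_flux(ts_null, ts_for_flux):
    """Median upper limit: the smallest scanned flux at which 90% of
    signal-injected pseudo-experiments exceed the median of the
    background-only TS distribution (i.e. give p < 0.5)."""
    ts_median_bkg = np.median(ts_null)
    for flux in sorted(ts_for_flux):
        frac_above = np.mean(np.asarray(ts_for_flux[flux]) > ts_median_bkg)
        if frac_above >= 0.9:
            return flux
    return None  # scanned range did not reach the sensitivity level
```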
As mentioned in section 10.4.6, we optimize the cut on the BDT score to achieve the best sensitivity over the whole SH. The upper panel in figure 10.29 shows the sensitivity for an $E_\nu^{-2}$ spectrum with a hard cut-off at 10 TeV and with different cuts applied to the BDT score. We found that a cut at 0.40 results in the best sensitivity considering the full range of declinations. The same cut also provides the optimal sensitivity for an unbroken $E_\nu^{-2}$ spectrum and an $E_\nu^{-2}$ spectrum with a hard cut-off at 1 TeV. Note that this investigation was done without a cut on σRescaled, as discussed next. The cut on σRescaled was optimized
Figure 10.29. The plot in the upper panel shows the sensitivity for different cuts on the BDT score (0.30, 0.40, 0.50, 0.60). The plot in the lower panel shows the sensitivity for different cuts on σRescaled (1° to 5°, and no cut). Both plots are constructed for an $E_\nu^{-2}$ spectrum with a hard cut-off at 10 TeV.
to achieve the best sensitivity after application of a cut on the BDT score at 0.40. The lower panel in figure 10.29 shows the sensitivity for an $E_\nu^{-2}$ spectrum with a hard cut-off at 10 TeV and different cuts applied to σRescaled. The orange line represents the sensitivity without a cut. A small improvement was found when cutting at 5°. This was also investigated for an unbroken $E_\nu^{-2}$ spectrum and an $E_\nu^{-2}$ spectrum with a hard cut-off at 1 TeV, where the same conclusion was reached. Note that both optimizations described above were done before a model with exponential cut-off was implemented in the likelihood algorithm. However, the difference in sensitivity between a model with a hard cut-off at an energy Ecut and a model with an exponential cut-off at the same energy is observed to be small.
Figure 10.30 shows the median sensitivity of the LESE analysis for four signal hypotheses, $d\Phi/dE_\nu = \Phi_0 \cdot E_\nu^{-2}e^{-E_\nu/1\,\mathrm{TeV}}$, $\Phi_0 \cdot E_\nu^{-3}$, $\Phi_0 \cdot E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$, and $\Phi_0 \cdot E_\nu^{-2}$, as a function of declination, corresponding to 1 year of data. Further, it shows the upper limits $\Phi^{90\%}_{\nu_\mu+\bar{\nu}_\mu}$ (markers) at 90% C.L. for sources in the pre-defined list, see section 10.6.4. Note that the vertical axis shows the flux in terms of Φ0 in units of TeV^(γ−1) cm⁻² s⁻¹, where γ is the spectral index of each hypothesis, respectively.
The upper panel in figure 10.31 shows the differential sensitivity of the LESE analysis, created by simulating point sources with an $E_\nu^{-2}$ spectrum over quarter-decades in energy. The lower panel illustrates the joint efforts within the IceCube collaboration towards lowering the energy threshold for point source searches. It is also calculated by simulating point sources with an $E_\nu^{-2}$ spectrum over quarter-decades in energy.
Figure 10.30. Median sensitivities of the LESE analysis as a function of declination (lines) and source upper limits (markers) at 90% C.L., for four different signal spectra. The livetime is 329 days.
Figure 10.31. The upper panel shows the differential sensitivity (solid lines) and discovery potential (dashed lines) for the LESE analysis at three different declinations in the SH (δ = −10°, −40°, −70°), corresponding to 1 year of operation. The lower panel shows differential sensitivities at δ = −60°, for four different IceCube analyses targeting the SH (LESE, STeVE, MESE, and through-going; all but MESE scaled from 1 year), corresponding to 3 years of operation.
It shows the differential sensitivity at δ = −60° for four different analyses: the LESE analysis, optimized in the energy range 100 GeV - a few TeV; the STeVE (Starting TeV Events) analysis, optimized at slightly higher energies (a few TeV - 100 TeV) [144]; the MESE analysis, which focused on energies in the region 100 TeV - 1 PeV [150]; and the classic through-going point source analysis, in black [149]. The sensitivities in this figure correspond to 3 years of operation and were estimated by simulating events with the livetime corresponding to the MESE analysis. Together these analyses cover an unprecedented energy range from about 100 GeV to EeV, almost 8 decades.
10.6.2 Test Statistic
The test statistic (TS) PDFs are shown in figures 10.32 and 10.33 for several declinations. The red histogram represents the TS-values obtained for the null hypothesis, i.e., without injection of signal events. The orange and green histograms show the distributions obtained for the sensitivity and discovery potential, respectively. The two latter are calculated for an $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ spectrum. Also shown, as dotted black lines, is the theoretical χ² function with two degrees of freedom, discussed in section 9.2. The dashed lines show functional fits to the observed TS distributions for the null hypothesis. The function consists of a χ² distribution plus a Dirac delta distribution at zero:

$$P(\mathrm{TS}) = \eta \cdot \chi^2(\mathrm{TS};\, \mathrm{ndof}) + (1 - \eta) \cdot \delta(\mathrm{TS}), \qquad (10.6)$$

where η is the fraction of pseudo-experiments with TS > 0 (the remainder having TS = 0), a correction factor caused by the restriction nS ≥ 0, and ndof is the number of effective degrees of freedom.
Figure 10.34 shows the two parameters of the function fitted to the TS distributions of the null hypothesis, as a function of declination. The ndof parameter is shown as black open circles. The horizontal line at 2.0 represents the theoretical value of ndof. It is clear from this figure that the actual ndof of the likelihood is closer to 1.25, around which it fluctuates. A spline is fitted to the data points of each distribution and used as a template for all declinations when calculating the pre-trial p-values of the observed data sample. In the edge regions sin δ < −0.9 and sin δ > −0.1, we use the default value of 1.25. A similar spline construction is made for the η parameter, shown as gray open circles. This value fluctuates around 0.5, i.e., 50% of the pseudo-experiments for the null hypothesis generate an over-fluctuation (TS > 0). The default value used for the spline in the edge regions is 0.5. All p-values are calculated using equation 10.6 with the values given by the parametrization of the η and ndof distributions. The p-values are robust against systematic uncertainties since they are calculated using thousands of pseudo-experiments on datasets randomized in right ascension.
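With the fitted parametrization, the pre-trial p-value evaluation reduces to a scaled χ² tail probability, sketched here (SciPy's chi2.sf accepts the non-integer ndof used in the fit):

```python
from scipy.stats import chi2

def pretrial_pvalue(ts, eta, ndof):
    """Pre-trial p-value from the fitted null-hypothesis TS distribution,
    per eq. (10.6): a chi-squared tail scaled by the over-fluctuation
    fraction eta (trials with TS = 0 carry no significance)."""
    if ts <= 0:
        return 1.0
    return eta * chi2.sf(ts, ndof)
```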
Figure 10.32. TS PDFs for declinations in the range from 0° to −50°, in steps of 10°. The TS-values for the null hypothesis are shown in red. The values obtained for the fluxes corresponding to the sensitivity and discovery potential are shown in orange and green, respectively. The two latter are calculated for an $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ spectrum. The theoretical χ² function with two degrees of freedom is shown as dotted black lines, while the functional fit to the observed TS distribution for the null hypothesis, using equation 10.6, is shown as a dashed line.
Figure 10.33. TS PDFs for declinations −60° and −70°. The TS-values for the null hypothesis are shown in red. The values obtained for the fluxes corresponding to the sensitivity and discovery potential are shown in orange and green, respectively. The two latter are calculated for an $E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ spectrum. The theoretical χ² function with two degrees of freedom is shown as dotted black lines, while the functional fit to the observed TS distribution for the null hypothesis, using equation 10.6, is shown as a dashed line.
Figure 10.34. The parameters of equation 10.6, fitted to the TS distributions for the null hypothesis, as a function of declination. The ndof (η) parameter is shown as black (gray) open circles. The horizontal line at 2.0 (0.5) represents the theoretical value of ndof (η). A spline is fitted to the data points of each distribution and used as a template for all declinations when calculating the pre-trial p-values of the observed data sample.
10.6.3 Sky Search
An all-sky search is performed, looking for an excess of neutrino candidate events in the SH, excluding a circle of 5° around the pole¹³. This search is not motivated by any prior knowledge regarding the positions of the sources; instead we are limited by the large number of trials performed, which together with the angular resolution sets a limit on the effective number of individual points tested. The search is done on a HEALPix¹⁴ [155] grid corresponding to a resolution of about 0.5°. The most interesting points are followed up with a finer binning.
The so-called pre-trial p-values are calculated for each spot on the grid, using equation 10.6 with the parameters extracted from the spline fits shown in figure 10.34. The p-values represent the probability of compatibility with the background-only hypothesis. The most significant spot, the so-called hotspot, is reported from this search. A so-called trial correction is performed to take into account the number of trials (searches) performed on the sky. This is done by repeating the sky search on the entire SH for pure background samples, i.e., where the R.A. directions are scrambled. The distribution of pre-trial p-values for the hottest spot in the SH, for 10,000 randomized trials, is used to calculate a final so-called post-trial p-value. It is defined as the fraction of trials which give a pre-trial p-value equal to or smaller than the observed one, i.e., at least as significant.
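The trial correction itself is a simple counting exercise, sketched below (p_hottest_trials is assumed to hold the pre-trial p-value of the hottest spot for each scrambled trial):

```python
import numpy as np

def posttrial_pvalue(p_observed, p_hottest_trials):
    """Fraction of scrambled-sky trials whose hottest-spot pre-trial
    p-value is at least as significant as (<=) the observed one."""
    trials = np.asarray(p_hottest_trials)
    return np.mean(trials <= p_observed)
```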
10.6.4 Search in a List of 96 Candidate Sources
We also perform a search at the positions in a pre-defined source list of 96 astrophysical objects: all 84 TeVCat¹⁵ [156] sources in the SH (δ ≤ 0°) (as of May 2015) and, in addition, 12 sources from the standard IceCube point source list:
• GX 339-4, Cir X-1, PKS 0426-380, PKS 0537-441, QSO 2022-077, PKS 1406-076, PKS 0727-11, QSO 1730-130, PKS 0454-234, PKS 1622-297, PKS 1454-354, ESO 139-G12.
The TeVCat listing consists of sources seen by ground-based γ-ray experiments, such as the Very Energetic Radiation Imaging Telescope Array System (VERITAS), the Major Atmospheric Gamma Imaging Cherenkov telescopes (MAGIC), and HESS. Here we consider sources from the catalogs ‘Default Catalog’ and ‘Newly Announced’, i.e., stable sources published or pending publication in refereed journals. The Galactic sources include, e.g., SNRs, PWNs, star formation regions, and compact binary systems, while the extragalactic sources include, e.g., AGNs and radio galaxies. Figure 10.35 shows all 96 sources in a sky map using Galactic (Equatorial) coordinates in the upper (lower) panel. The sources are categorized as they appear in the TeVCat listing.
¹³ The likelihood algorithm is not efficient close to the pole, since we run out of disjoint scrambling sets.
¹⁴ http://healpix.sourceforge.net
¹⁵ http://tevcat.uchicago.edu
Figure 10.35. Sky map of the 96 sources considered in the source list scan. The map is shown in Galactic (a) and Equatorial (b) coordinates (J2000). The position of each source is indicated, divided into categories as they appear in the TeVCat listing: Galactic Plane, SNR, Superbubbles, PWN, Binary, Not identified, DARK, Massive Star Cluster, BL Lac, Fanaroff-Riley I, Galactic Center, Starburst, Globular cluster, FSRQ, XB/mqso, and Seyfert. The Equatorial (Galactic) plane is shown as a curved black line in the upper (lower) plot.
Many of these sources have small angular distances from each other, smaller than the angular uncertainty of the analysis. This effect of correlation is included in the TS evaluation, as the scrambling is done using the actual, unaltered source list.
The p-values are calculated for each of the 96 sources in the source list, using equation 10.6 with the parameters shown in figure 10.34. The p-value for the most significant source is trial-corrected using background-only scrambled datasets. The procedure follows the one described in section 10.6.3, but only considers the hotspot among the 96 sources in each trial.
10.7 Results
The events in the final sample of experimental data were unblinded, i.e., the
true event timestamps were used to convert the local coordinates to equatorial
coordinates (α, δ).
The resulting p-values from the sky search are shown in figure 10.36, where
the hotspot is marked with a black circle. The figure is shown in equatorial
coordinates. The hotspot, is located at α = 305.2◦ and δ = −8.5◦ , with best fit
parameters n̂S = 18.3 and γ̂ = 3.5. It has a pre-trial p-value of 1.6 · 10−4 .
The distribution of pre-trial p-values from scrambled trials is shown in figure 10.37. The vertical lines represent the 1, 2, and 3 σ limits, respectively, calculated from the pre-trial p-value distribution. The observed pre-trial p-value
is indicated with a solid vertical line. The corresponding post-trial p-value
equals 88.1%, i.e., the search is well compatible with the expectation from
background. The best-fit values n̂S and γ̂ are shown for all bins in the sky scan in the upper and lower panels of figure 10.38, respectively. Figure 10.39 shows n̂S as a function of the number of injected events, and is constructed from trials with signal of varying strength injected on top of a randomized experimental data sample at the declination of the hottest spot, i.e., δ = −8.5°. The number of fitted signal events n̂S for the hotspot is shown as a black horizontal line. The observed value of n̂S = 18.3 corresponds to roughly 15 injected events. Note that this illustrates the situation before trial correction is applied.
In the upper panel of figure 10.40 we show a zoom-in on the TS distribution
in the area surrounding the hottest spot in the sky search. The position of the
hottest spot is illustrated with a red open circle with a cross. The hottest source
in the a priori source list is shown with a red open circle without a cross. The
individual event positions in the final sample are illustrated as black dots. The
lower panel of figure 10.40 shows a zoom-in on the same coordinates, with
color indicating the so-called individual event weight ln (1 + nS χi ) for each
event contributing to the test statistic at this particular position and with the
best-fit parameters of the hottest spot (see equation 9.6),
where χi is defined as:

    χi = (1/N) (Si/Sbkg,i · Wi − 1) .    (10.7)

Figure 10.36. Pre-trial significance sky map in equatorial coordinates (J2000) of the sky search in the SH. The black line indicates the Galactic plane. The most significant position is indicated with a black circle.
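For illustration only, a minimal sketch (hypothetical names, assuming the reconstruction of equation 10.7 above) of the per-event contribution ln (1 + nS χi) to the test statistic:

    import numpy as np

    def event_weights(S, S_bkg, W, n_s):
        # chi_i = (1/N) * (S_i / S_bkg_i * W_i - 1), cf. equation 10.7
        S, S_bkg, W = map(np.asarray, (S, S_bkg, W))
        chi = (S / S_bkg * W - 1.0) / len(S)
        # per-event weight entering the test statistic (log1p for stability)
        return np.log1p(n_s * chi)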
The most significant source in the a priori list was QSO 2022-077 (alternate name: 2EG J2023-0836), located at α = 306.5° and δ = −7.6°, a FSRQ with a pre-trial p-value of 2.5 · 10−3. The p-value distribution for the scrambled trials using the pre-defined source list is shown in gray in figure 10.41. The vertical lines represent the 1, 2, and 3 σ limits, respectively. Note that the scale of the p-values is significantly lower than the corresponding values in figure 10.36 for the sky scan. This is due to the much lower number of effective trials in the search using a pre-defined source list. The observed pre-trial p-value for the hottest source is indicated with a solid vertical line. The post-trial p-value equals 14.8%, and is compatible with a pure background scenario. For the sources in the list, we calculate the upper limit at 90% Confidence Level (C.L.), based on the classical approach [157], for the signal hypotheses Eν−2, Eν−2 e−Eν/10 TeV, Eν−2 e−Eν/1 TeV, and Eν−3, see figure 10.30. The limits of the ten most significant sources are presented in table 10.5 along with the best-fit parameters n̂S and γ̂. Note that we do not consider under-fluctuations, i.e., for observed values of TS below the median TS for the background-only hypothesis, we report the corresponding median upper limit.
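The under-fluctuation rule can be sketched as follows, where limit_for is a hypothetical helper mapping a TS value to a 90% C.L. flux limit via the classical construction [157] (this is not the actual analysis code):

    import numpy as np

    def reported_limit(ts_obs, ts_bkg_trials, limit_for):
        # If the observed TS under-fluctuates below the background-only
        # median, report the median upper limit (the sensitivity) instead.
        ts_median = np.median(ts_bkg_trials)
        return limit_for(max(ts_obs, ts_median))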
[Figure 10.37: histogram of the fraction of scrambled trials versus −log10(p-value), with vertical lines marking the 1σ, 2σ, and 3σ levels and the observed p-value.]
Figure 10.37. The distribution of the smallest p-value in the Southern hemisphere,
obtained from randomized data. The vertical lines represent the 1, 2, and 3 σ limits,
respectively. The observed pre-trial p-value is indicated with a solid vertical line.
Figure 10.38. The upper (lower) plot shows the distribution of the best-fit values n̂S (γ̂) of the observed data sample in the sky search in the SH. The sky maps are shown in equatorial coordinates (J2000). The black line indicates the Galactic plane. The most significant position is indicated with a black circle. The sky map for γ̂ illustrates the distribution of values for directions with n̂S > 0.
[Figure 10.39: two-dimensional histogram of the number of fitted events (0-80) versus the number of injected events (0-50), with a color scale from 10−4 to 10−1 giving the normalized fraction per column.]
Figure 10.39. Distribution of the number of fitted signal events n̂S as a function of the number of injected events at the declination of the hottest spot, i.e., δ = −8.5°. The plot is constructed from trials where signal events from an Eν−2 neutrino spectrum with a hard cut-off at 10 TeV are injected at a fixed position, on top of a randomized experimental data sample. For zero injected events we used 100,000 trials, while for the others we used 10,000 trials. The entries in each bin on the horizontal axis (the number of injected events) are normalized so that the sum in each column equals 1. The horizontal black line shows the observed best-fit n̂S for the hottest spot in the Southern hemisphere sky search.
Figure 10.40. The upper panel shows a zoom-in on the test statistic TS distribution in the area surrounding the hottest spot (red circle with red cross). The lower panel shows the individual event weights χi defined in equation 10.7, normalized so that the highest weight corresponds to 1. The black dots show the individual event positions of the final sample. The position of the hottest source in the pre-defined source list is shown with a red open circle without a cross.
Table 10.5. Results for the 10 most significant astrophysical sources in the a priori search list. Sources are shown in descending order of p-value. The p-values are the pre-trial probability of compatibility with the background-only hypothesis. The n̂S and γ̂ columns give the best-fit values for the number of signal events and the spectral index of an unbroken power-law, respectively. The last four columns show the 90% C.L. upper flux limits Φ90% for νμ + ν̄μ, based on the classical approach [157], for different source spectra normalizations in units TeVγ−1 cm−2 s−1.

Source         | R.A. [°] | dec. [°] | −log10(p-val.) | n̂S   | γ̂   | Eν−2       | Eν−2 e−Eν/10 TeV | Eν−2 e−Eν/1 TeV | Eν−3
QSO 2022-077   | 306.5    | −7.6     | 2.61           | 17.3 | 3.5 | 6.74·10−10 | 1.56·10−9        | 5.81·10−9       | 1.27·10−9
HESS J1718-385 | 259.5    | −38.5    | 1.58           | 5.9  | 3.4 | 7.62·10−10 | 2.80·10−9        | 3.21·10−8       | 5.58·10−9
HESS J1841-055 | 280.3    | −5.5     | 1.48           | 5.0  | 2.4 | 5.01·10−10 | 1.19·10−9        | 4.52·10−9       | 9.88·10−10
KUV 00311-1938 | 8.4      | −19.4    | 1.40           | 7.1  | 3.4 | 8.25·10−10 | 2.47·10−9        | 1.33·10−8       | 2.65·10−9
30 Dor-C       | 83.9     | −69.1    | 1.04           | 2.3  | 3.6 | 9.14·10−10 | 3.65·10−9        | 5.35·10−8       | 7.92·10−9
HESS J1026-582 | 156.6    | −58.2    | 1.04           | 2.0  | 3.0 | 6.67·10−10 | 2.61·10−9        | 3.37·10−8       | 5.39·10−9
LMC N132D      | 81.3     | −69.6    | 1.03           | 2.1  | 3.7 | 9.41·10−10 | 3.64·10−9        | 5.38·10−8       | 7.85·10−9
LHA 120        | 84.3     | −69.1    | 1.01           | 2.2  | 3.6 | 9.19·10−10 | 3.56·10−9        | 5.33·10−8       | 7.78·10−9
PKS 1510-089   | 228.3    | −9.1     | 1.00           | 6.3  | 4.5 | 4.53·10−10 | 1.02·10−9        | 3.79·10−9       | 8.16·10−10
NGC 253        | 11.9     | −25.3    | 1.00           | 5.3  | 3.3 | 7.10·10−10 | 2.83·10−9        | 2.05·10−8       | 3.88·10−9
[Figure 10.41: histogram of the fraction of scrambled trials versus −log10(p-value) for the source-list search, with vertical lines marking the 1σ, 2σ, and 3σ levels and the observed p-value.]
Figure 10.41. The distribution of the smallest p-value out of the 96 objects in the predefined source list, obtained from randomized data. The vertical lines represent the 1,
2, and 3 σ limits, respectively. The observed pre-trial p-value is indicated with a solid
vertical line.
Table 10.6. Datasets used in the evaluation of systematic uncertainties.

Generator | Type | Dataset | Notes
NuGen     | νμ   | 10602   | Baseline
NuGen     | νμ   | 10039   | Absorption +10%
NuGen     | νμ   | 10040   | Scattering +10%
NuGen     | νμ   | 10041   | Scattering and absorption −10%
NuGen     | νμ   | 10437   | DOM efficiency −10%
NuGen     | νμ   | 10438   | DOM efficiency +10%
10.8 Systematic Uncertainties
The analysis described in this chapter uses scrambled experimental data to estimate the background. The p-values are therefore not sensitive to theoretical uncertainties on the fluxes of atmospheric neutrinos and muons, and are robust against uncertainties in the simulation of the detector. The sensitivity and upper limits, however, are calculated using simulated neutrino events, including simulations of the detector response, detector medium, etc. These are therefore affected by the systematic uncertainties on these quantities.
The systematic uncertainties of main importance in this analysis are the ones related to the detector efficiency, such as the ice model used, uncertainties in the neutrino-nucleon cross-sections, and the absolute light detection efficiency of the DOM (DOM efficiency). The impact of systematic uncertainties is evaluated by studying the change in sensitivity when injecting signal events from a model different from the so-called baseline model (dataset 10602) used in the analysis up to this point. Note that the analysis chain including all cuts is unchanged during these studies. We use a set of five signal simulations, each modified with respect to one of the systematic uncertainties considered, see table 10.6. Each row in the table specifies the neutrino flavor, dataset number, and change in the configuration.
Ice Properties
One of the largest uncertainties comes from the uncertainty in the modeling of
the ice in the detector. Two parameters, the scattering and absorption lengths,
see section 5.2, were varied with respect to the baseline values. Note that the
uncertainties in these parameters are correlated through the ice model. We
study three different datasets: datasets 10039 and 10040 have a 10% increase
in the absorption and scattering length, respectively, while dataset 10041 has
a 10% decrease in both the scattering and absorption lengths. This leads to a
systematic uncertainty of 4% on the sensitivity for an Eν−2 spectrum, and 15% on the sensitivity for a spectrum with an exponential cut-off at 10 TeV.
Detector Properties
The uncertainty on the absolute efficiency of the optical modules, the so-called
DOM efficiency (see section 6.1.3), is studied using a conservative estimate of
±10%. This is a linear scaling of how well the light is converted to electrical
signals in the DOMs, including PMT quantum efficiency, transmittance of the
optical gel and glass housing, local hole ice, etc. The 10% roughly corresponds
to the standard deviation of this quantity as measured in the laboratory and
again after deployment. The effect for all spectra considered is about 10%,
see table 10.7.
Cross-Section
As mentioned in section 6.1.1, the baseline simulation of muon-neutrinos is configured with CTEQ5 [104] parton distribution functions (PDFs) and cross-sections from [103]. Another cross-section model used in IceCube is CSMS (Cooper-Sarkar, Mertsch and Sarkar) [158]. It yields a slightly lower event rate, ≤5%, compared to CTEQ5. This value is used to represent the systematic uncertainty in the cross-section for all neutrino spectra considered.
Analysis Method
The final likelihood depends on a few random components (scrambling of the dataset, Poisson sampling of the number of signal events, etc.). The effect of
this randomness was estimated by calculating the sensitivity for the baseline dataset with different input seeds. This calculation shows that it is less than 0.5%, and it is hence negligible compared to the other systematic uncertainties considered.
Further, there is a probability that the IceTop Veto applied in the event selection, see section 10.4.5, will veto signal neutrinos by random coincidence. This effect was estimated to be roughly 0.13% and can hence also be neglected in comparison to the impact of other uncertainties.
Results
The results from the systematics studies are shown in table 10.7 and figure
10.42, illustrating the resulting uncertainty on the sensitivity for four different
signal spectra. The numbers presented in the table represent the square-root of
the sum of quadratic contributions from each of the categories of systematics
considered. These are rough estimates, and subject to statistical uncertainties
due to the limited number of simulated events in the datasets used. The total
uncertainty on each flux model is also illustrated as bands in figure 10.42. The
band widths correspond to the values in the bottom line of table 10.7, for each
flux model.
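The totals in the bottom line of table 10.7 follow from adding the individual contributions in quadrature; a minimal check in Python (taking the ≤5% cross-section bound at its maximum):

    import numpy as np

    # Contributions in percent, per spectrum (table 10.7)
    contrib = {
        "E-2":                [4, 12, 5],
        "E-2 exp(-E/10 TeV)": [15, 8, 5],
        "E-2 exp(-E/1 TeV)":  [15, 9, 5],
        "E-3":                [13, 7, 5],
    }
    for spectrum, c in contrib.items():
        total = np.sqrt(np.sum(np.square(c)))
        print(f"{spectrum}: {total:.0f}%")  # 14%, 18%, 18%, 16%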
[Figure 10.42: flux normalization Φ0 [TeV(γ−1) cm−2 s−1] versus sin δ (−1.0 to 0.0), with one band per spectrum: E−2 e−E/1 TeV, E−3, E−2 e−E/10 TeV, and E−2.]
Figure 10.42. Systematics impact on sensitivity shown for four different neutrino
spectra. Each band represents the square-root of the quadratic sum of the contributions
from all systematics considered, for each flux model.
Table 10.7. A summary of the systematics considered and the impact on the final sensitivities for various neutrino spectra. The total is the square-root of the quadratic sum of each contribution, for each column, respectively.

Systematic Uncertainty  | Eν−2 | Eν−2 e−Eν/10 TeV | Eν−2 e−Eν/1 TeV | Eν−3
Absorption & scattering | 4%   | 15%              | 15%             | 13%
DOM efficiency          | 12%  | 8%               | 9%              | 7%
Cross-section           | ≤5%  | ≤5%              | ≤5%             | ≤5%
Total                   | 14%  | 18%              | 18%             | 16%
11. Summary and Outlook
"Neutrino physics is largely an art of learning
a great deal by observing nothing."
Haim Harari
In this thesis we presented the results of searches for astrophysical point
sources of neutrinos in the Southern hemisphere (SH) using data from the
first year of the completed 86-string configuration of IceCube, the so-called
Low-Energy Starting Events (LESE) analysis. The analysis builds on an event
selection of low-energy neutrino candidates, tracks starting inside the instrumented volume, and is sensitive to neutrinos in the range from 100 GeV to
a few TeV. Background rejection is achieved by using several advanced veto
techniques. The final sample of experimental data consists of 6191 events and
corresponds to a livetime of 329 days. The median angular resolution at final level is 1.7° for an Eν−2 e−Eν/10 TeV spectrum and slightly better for an unbroken Eν−2 spectrum.
The events in the final sample were used in searches for spatial clustering
in the SH: one search scanning the whole SH and another targeting a list of
96 pre-defined sources. The latter were selected from a list of known γ-ray
emitters seen by ground-based experiments such as VERITAS, MAGIC, and
HESS. No evidence of neutrino emission from point-like sources in the SH at
low energies (100 GeV - few TeV) was found. The most significant spot in the sky search has a post-trial p-value of 88.1% and was fit with 18.3 signal events with an Eν−3.5 spectrum. The most significant source in the source list has a post-trial p-value of 14.8% and was fit with 17.3 signal events with an Eν−3.5 spectrum. Upper limits at 90% C.L. were calculated for all sources for a number of signal hypotheses, see figure 10.30 and the tables in appendix B (the 10 most significant sources are also shown in table 10.5).
These searches are quite model-independent. An increased sensitivity can be achieved when targeting models specific to a source or a source class, i.e., by adding model-dependent information in the likelihood. For example, one can study catalogs of similar objects using so-called stacking of sources, see e.g. [159] and [160]. Further, many astrophysical objects have time-dependent
[Figure 11.1: E2 dN/dE [10−10 TeV cm−2 s−1] versus years of IceCube data-taking (1-15), showing the sensitivity at δ = −60° and δ = −15° together with the corresponding √t extrapolations from one year.]
Figure 11.1. LESE point source sensitivity at two declinations for an Eν−2 e−Eν/10 TeV spectrum, as a function of years of data-taking. Due to the low background rate, the sensitivity improves faster than the √t limit (dashed line). Adding four more years of data will lead to a sensitivity that is a factor 3-4 times better than the results presented in this thesis.
emission. By limiting the sample to a specific time interval the background
can be suppressed substantially. Plans also exist to use the LESE event sample
to search for extended point sources and for neutrino emission in the Galactic
plane.
Ultimately, more data are needed to identify astrophysical objects with neutrino emission. Three more years of IceCube data with the 86-string configuration are ready to be incorporated into this analysis. The sensitivity is expected
to improve slightly faster than square-root of time, due to the relatively low
background rate. This is illustrated for two different declinations in figure
11.1. The improvement can be accelerated by developments in background
rejection techniques and improvements in the angular reconstruction for lowenergy events. Further, a small gain can be achieved by expanding the likelihood description to include the individual probability for events to be starting
inside the detector volume, determined from reconstructions.
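To illustrate why a low background rate lets the sensitivity improve faster than 1/√t, the following sketch compares the background-dominated reference scaling with a faster, purely illustrative fall-off (the exponent is an assumption, not a fit to figure 11.1):

    import numpy as np

    years = np.arange(1, 16)
    # Background-dominated analyses improve roughly as 1/sqrt(t):
    reference = 1.0 / np.sqrt(years)
    # A low background rate lets the sensitivity fall off faster; the
    # exponent 0.6 is chosen for illustration only.
    low_background = 1.0 / years**0.6
    for t, r, l in zip(years, reference, low_background):
        print(f"{t:2d} yr: 1/sqrt(t) = {r:.2f}, illustrative = {l:.2f}")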
Although IceCube's observation of a diffuse flux of astrophysical neutrinos marks the beginning of neutrino astronomy, it seems IceCube is not yet sensitive to the neutrino point source flux levels expected on the basis of γ-ray experiments, particularly at the lowest energies. The limits presented in this thesis are about one order of magnitude higher than what is achieved with the ANTARES neutrino telescope in the Mediterranean. For studies in the SH,
Figure 11.2. Proposed IceCube extension, IceCube Gen-2.
ANTARES benefits by using the Earth to remove atmospheric muons. Further, photons in sea water scatter less than in glacial ice but are more rapidly
absorbed. This lack of late scattered photons results in improved reconstruction accuracy for ANTARES. The joint efforts of the IceCube collaboration
in the Southern hemisphere now give us sensitivities for point sources in the
energy range from about 100 GeV to EeV, getting closer to the ANTARES
sensitivities.
Currently, plans exist to expand the IceCube Neutrino Observatory in several ways. Figure 11.2 shows an illustration of the proposed IceCube Gen-2 facility [161], including both a low-energy extension, the Precision IceCube Next Generation Upgrade (PINGU), shown in purple (inside the green region), which is an even denser central sub-array inside DeepCore, and a high-energy extension surrounding IceCube, reaching an instrumented volume of O(10) km3, shown in blue. The current IceCube (DeepCore) detector is shown in red (green). The physics goals for PINGU include determining the neutrino mass hierarchy, studying neutrino oscillations, and searching for dark matter. The high-energy extension will focus on energies above 100 TeV to characterize the
observed diffuse flux of neutrinos as well as to continue the search for astrophysical point sources of neutrinos. Plans also exist to add an extended
surface veto that will help in the rejection of the atmospheric background for
the searches in the SH. Further, the proposed KM3NeT Neutrino Telescope in the Mediterranean Sea hopes to reach the sensitivity required to observe TeV Galactic sources in the SH within a couple of years of livetime [162].
The origin of the high-energy cosmic rays remains a mystery, but this continues to be an exciting period for neutrino astronomy, as we continue to search for the first point source of high-energy neutrinos.
Summary in Swedish

Thesis title: Exploring the Universe using neutrinos - A search for point sources in the southern sky in data from the IceCube neutrino telescope

Neutrinos are unique in that they interact with other particles only through the so-called weak interaction. This makes them particularly well suited as messengers of cosmic information.

The IceCube instrument is one of a kind and has a broad research program, covering both the largest scales of the Universe, where we find e.g. dark matter and the sources of cosmic rays, and the smallest scales, where we study the properties of nature's smallest constituents, such as neutrino oscillations.

In this thesis we search for neutrinos from the southern sky at lower energies than previous searches (disregarding neutrinos in the MeV range); the analysis is the first of its kind. The sources that produce cosmic rays at intermediate energies should also be able to produce neutrinos that would have been missed by earlier analyses but could be observed with the methods described in this thesis. The neutrinos can clearly show where this part of the cosmic radiation is created and/or accelerated, a question that has engaged the scientific community for over 100 years.

Where do cosmic rays come from?

The Earth is constantly bombarded by charged particles, so-called cosmic rays, mainly protons, produced somewhere out in the Universe. Their energies can be several million times higher than what we can achieve ourselves on Earth, e.g. at the LHC accelerator at CERN. Although the existence of these particles was discovered as early as 1912 by the Austrian-American physicist Victor Hess, we know very little about how and where they were created. This is mainly because the particles are electrically charged and are therefore deflected by the magnetic fields of the Universe (and thus no longer point back to the place where they were created and/or accelerated), and because the number of particles reaching the Earth falls off rapidly with increasing particle energy.

The Universe contains a variety of phenomena that can only be described as extreme: colliding galaxies, black holes, exploding stars, and more. These release enormous amounts of energy which, in theory, could create and accelerate cosmic particles. There is no shortage of potential candidates, but no such environment has yet been convincingly identified. The answer to where the charged particles come from and how they are accelerated is one of the most interesting and widely discussed questions in physics today.

What makes neutrinos so interesting?

Neutrinos are unique messengers of information. Since they lack electric charge, they travel without being deflected by the magnetic fields of the Universe. In addition, they interact only very weakly with other matter, which means that they are not absorbed or otherwise affected during their journey through the Universe. They travel in a straight line from the place where they are created. By observing the neutrinos that reach the Earth, we can therefore, for the first time, get a picture of what happens in some of the most extreme and energy-dense places in the Universe.

Neutrinos also serve as a litmus test to distinguish between two different types of acceleration, so-called leptonic and hadronic acceleration. The latter, unlike the former, is guaranteed to yield a large flux of neutrinos as the result of a chain of decays of heavy mesons, mainly pions and kaons. By locating neutrino emission we can thus explore sites where protons and heavier hadrons are created and/or accelerated.

An observation of neutrinos also makes it possible to form a unique overall picture, in which we can tie together knowledge from different channels (e.g. observations made in the electromagnetic spectrum (X-rays, microwaves, radio, etc.)) and thereby gain a deeper understanding of already familiar phenomena such as active galactic nuclei (AGNs).

In 2013 the IceCube collaboration published an article in Science Magazine on the discovery of the first high-energy neutrinos originating somewhere outside our solar system. The discovery attracted great attention, was named Breakthrough of the Year by the British journal Physics World, and gave rise to hundreds of published scientific articles attempting to further explain their origin. The analysis presented in this thesis concerns neutrinos of somewhat lower energy, which are particularly difficult to capture since they are easily confused with particles from background processes in the Earth's atmosphere.

How do we detect neutrinos?

Since neutrinos interact only through the weak interaction, they are very difficult to detect. Enormous detectors and measurements over several years are usually required. The IceCube Neutrino Observatory is a unique neutrino telescope located at a depth of 1.5 to 2.5 km below the surface at the geographic South Pole in Antarctica. It consists of a total of 5160 optical modules which, together with the ice, form a cubic-kilometer instrumented volume in three dimensions, see the illustration in figure 11.3.

Figure 11.3. An illustration of the IceCube Neutrino Observatory, located in the glacier at the geographic South Pole in Antarctica. Credit: Jamie Yang/WIPAC.

The international IceCube project includes around 300 researchers from about 40 institutions in 12 countries. In Sweden there are two research groups: one at Uppsala University and one at Stockholm University. In addition to the resources available at the universities, the project is supported by the NSF (National Science Foundation) in the USA as well as, among others, the Swedish Research Council and the Knut and Alice Wallenberg Foundation.

The optical modules in IceCube, so-called DOMs, detect light (so-called Cherenkov radiation) created when electrically charged particles travel through the ice at a speed exceeding the speed of light in ice. Such charged particles are created e.g. when neutrinos interact with atomic nuclei in the ice. But charged particles are also created when cosmic rays interact with particles in the Earth's atmosphere.

To a large extent, the thesis is about distinguishing between these two types of events in the detector, by constructing variables and algorithms that select the events resembling a signal originating outside the solar system. This is done by analyzing which of the optical modules were hit, at what time, and how much energy was deposited in the vicinity of each module. With this information we can reconstruct the direction and energy of the original neutrino. The analysis is very complex, as the background of uninteresting events is several million times larger than the number of interesting signal events.

Analysis Method and Results

In the analysis we have focused on events that start in an inner subset of the detector. This is done by using the outermost layers of optical modules as a veto. In this way we can separate events where a neutrino interacts and produces a charged particle inside the detector from events where a charged particle passes through the veto layer and leaves traces there, in the form of light picked up by the modules. Previous analyses have focused on higher energies in order to reduce the remaining background of charged particles from the atmosphere, but have thereby lost all sensitivity to neutrinos at lower energies.

The analysis involves several steps, in which we focus on different properties, including the quality of the directional information and the probability of so-called leakage, where a charged particle sneaks in through the veto without leaving any traces, thereby mimicking a starting signal event. One of the steps involves a machine-learning algorithm of the BDT type, which can be trained to recognize signal and background by combining several different variables and treating them simultaneously.

To select the variables used to separate the signal from the background of charged particles, mainly muons (heavy electron-like particles), we have used very detailed simulations of both signal and background. The simulated background has also been compared with a small subset of experimental data to verify the reliability of the simulation. The simulation takes into account, among other things, the interactions of the particles, the scattering of light in the ice, and the response of the detectors to the light as well as their noise.

The event selection is also unique in that we accept signal events from the entire southern sky, instead of focusing on smaller localized regions as in previous analyses at lower energies. One of the major challenges here has been handling the enormous amount of data collected. To cope with this number of events, a new data format was created which compresses all collected data with a minimal loss of information. Together with the filter we developed for this purpose, the events could be collected and stored in a gigantic event bank with over 6 billion events per year (2011-). Several researchers within IceCube have expressed interest in using this event bank, and an analysis at somewhat higher energies originates from the same event selection.

In the final step of the analysis we use the directions of the events in combination with the deposited energy in a large likelihood test, which tests whether the experimental dataset contains a component from point-like neutrino sources against the hypothesis that it is isotropically distributed and thus contains no neutrino sources stronger than our sensitivity can reveal. The sensitivity to several different types of signal was calculated, and the entire analysis underwent a rigorous review within the IceCube collaboration before we were allowed to look at the final distribution of events, with a livetime of about 329 days.

No localized source of neutrinos could be identified in the event bank, and limits were therefore calculated on the possible particle flux as a function of the coordinates in the southern sky. In addition, limits were calculated for 96 pre-defined sources, selected because they have been observed in very high-energy electromagnetic radiation by other experiments.

The analysis presented in this thesis provides pioneering evidence that the method used works. By using the outer parts of the instrumented volume of light-sensitive detectors as a veto, we can reach a very high degree of sensitivity to neutrinos from the southern sky even at low energies. Three more years of IceCube data are available and will yield a factor of 3-4 higher sensitivity.
Appendix A.
BDT Input Variables
This appendix contains plots of all 22 variables considered in the initial BDT
described in section 10.4.6. The legend is described in section 10.4. Data from
‘Burnsample-BIG’ is used to represent experimental data.
[Figure A.1. BDT variables: (a) z-coordinate of the reconstructed interaction vertex (finiteReco) [m], (b) distance in the xy-plane of the reconstructed interaction vertex [m], (c) LLHR Starting Track, (d) rlogl of the reconstructed interaction vertex. Each panel shows the event rate [mHz] as a function of the variable.]
[Figure A.2. BDT variables cont.: (a) magnitude of vLineFit [m/ns], (b) x-component of vLineFit [m/ns], (c) y-component of vLineFit [m/ns], (d) z-component of vLineFit [m/ns], (e) zenith angle θ of SPEFit2 [°], (f) rlogl of SPEFit2. Each panel shows the event rate [mHz] as a function of the variable.]
[Figure A.3. BDT variables cont.: (a) z-coordinate of the PTP point [m], (b) distance in the xy-plane of the PTP point [m], (c) total charge [p.e.] (TWSRTOfflinePulses), (d) accumulation time of pulses [ns] (TWSRTOfflinePulses), (e) ZTravel [m], (f) smoothness of the hit distribution (TWSRTOfflinePulses). Each panel shows the event rate [mHz] as a function of the variable.]
[Figure A.4. BDT variables cont.: (a) total charge of late pulses (tres > 15 ns) [p.e.], (b) number of direct pulses (tres category D), (c) distance between the first hit and the PTP point [m], (d) distance between the PTP point and the reconstructed interaction vertex [m], (e) z-coordinate of the COG [m] (TWSRTOfflinePulses), (f) spread σz of the COG [m] (TWSRTOfflinePulses). Each panel shows the event rate [mHz] as a function of the variable.]
Appendix B.
Result Tables
This appendix contains the tabulated results for sources in the a priori search list of 96 astrophysical objects1, see section 10.6.4.
1 Note that the point source likelihood algorithm did not converge for 'SNR G015.4+00.1' and 'SHBL J001355.9-185406'. No limits are reported for these sources.
Table 11.1. Results for astrophysical sources in the a priori search list. Sources are shown in descending order of declination. The p-values represent the pre-trial probability of compatibility with the background-only hypothesis. The n̂S and γ̂ columns give the best-fit values for the number of signal events and the spectral index of an unbroken power-law, respectively. When n̂S < 0.05, no p-value or γ̂ are reported. The last four columns show the 90% C.L. upper flux limits Φ90% for νμ + ν̄μ, based on the classical approach [157], for different source spectra normalizations in units TeVγ−1 cm−2 s−1. This table continues on the following pages.

Source          | R.A. [°] | dec. [°] | −log10(p-val.) | n̂S   | γ̂   | Eν−2       | Eν−2 e−Eν/10 TeV | Eν−2 e−Eν/1 TeV | Eν−3
IGR J18490-0000 | 282.2    | 0.0      | -              | 0.0  | -   | 9.36·10−10 | 2.00·10−9        | 7.07·10−9       | 1.62·10−9
HESS J1848-018  | 282.0    | −1.8     | -              | 0.0  | -   | 5.12·10−10 | 1.08·10−9        | 3.80·10−9       | 8.55·10−10
HESS J1846-029  | 281.6    | −2.9     | -              | 0.0  | -   | 3.53·10−10 | 7.38·10−10       | 2.63·10−9       | 5.91·10−10
HESS J1843-033  | 280.8    | −3.3     | -              | 0.0  | -   | 3.01·10−10 | 6.59·10−10       | 2.38·10−9       | 5.32·10−10
HESS J1841-055  | 280.3    | −5.5     | 1.48           | 5.0  | 2.4 | 5.01·10−10 | 1.19·10−9        | 4.52·10−9       | 9.88·10−10
3C 279          | 194.1    | −5.8     | 0.46           | 3.1  | 2.7 | 2.80·10−10 | 6.35·10−10       | 2.38·10−9       | 5.14·10−10
HESS J1837-069  | 279.4    | −6.9     | 0.61           | 4.2  | 2.6 | 3.26·10−10 | 7.41·10−10       | 2.76·10−9       | 5.77·10−10
QSO 2022-077    | 306.5    | −7.6     | 2.61           | 17.3 | 3.5 | 6.74·10−10 | 1.56·10−9        | 5.81·10−9       | 1.27·10−9
PKS 1406-076    | 212.2    | −7.9     | 0.55           | 1.5  | 2.0 | 3.12·10−10 | 6.88·10−10       | 2.60·10−9       | 5.55·10−10
HESS J1834-087  | 278.6    | −8.8     | 0.53           | 3.6  | 3.5 | 3.36·10−10 | 7.22·10−10       | 2.54·10−9       | 5.72·10−10
PKS 1510-089    | 228.3    | −9.1     | 1.00           | 6.3  | 4.5 | 4.53·10−10 | 1.02·10−9        | 3.79·10−9       | 8.16·10−10
HESS J1832-093  | 278.3    | −9.4     | 0.50           | 3.3  | 3.4 | 3.41·10−10 | 7.66·10−10       | 2.63·10−9       | 5.65·10−10
Table 11.2. Results for astrophysical sources on the a priori search list. Table continued.

Source                | R.A. [°] | dec. [°] | −log10(p-val.) | n̂S  | γ̂   | Eν−2       | Eν−2 e−Eν/10 TeV | Eν−2 e−Eν/1 TeV | Eν−3
HESS J1831-098        | 277.8    | −9.9     | 0.42           | 2.7 | 3.0 | 3.30·10−10 | 6.97·10−10       | 2.47·10−9       | 5.54·10−10
PKS 0727-11           | 112.6    | −11.7    | 0.20           | 0.2 | 3.8 | 2.99·10−10 | 6.67·10−10       | 2.59·10−9       | 5.50·10−10
1ES 0347-121          | 57.3     | −11.9    | 0.43           | 2.6 | 3.9 | 3.68·10−10 | 8.12·10−10       | 3.13·10−9       | 6.60·10−10
QSO 1730-130          | 263.2    | −13.1    | 0.56           | 2.7 | 4.5 | 4.28·10−10 | 1.04·10−9        | 4.08·10−9       | 8.68·10−10
HESS J1825-137        | 276.4    | −13.9    | -              | 0.0 | -   | 3.13·10−10 | 7.56·10−10       | 3.14·10−9       | 6.35·10−10
LS 5039               | 276.6    | −14.8    | -              | 0.0 | -   | 3.08·10−10 | 8.10·10−10       | 3.51·10−9       | 7.14·10−10
SNR G015.4+00.1       | 274.6    | −15.5    | -              | 0.0 | -   | -          | -                | -               | -
HESS J1813-178        | 273.3    | −17.8    | -              | 0.0 | -   | 3.43·10−10 | 9.60·10−10       | 4.56·10−9       | 8.91·10−10
SHBL J001355.9-185406 | 3.5      | −18.9    | -              | 0.0 | -   | -          | -                | -               | -
HESS J1809-193        | 272.6    | −19.2    | 0.59           | 1.5 | 4.6 | 5.57·10−10 | 1.57·10−9        | 8.33·10−9       | 1.63·10−9
KUV 00311-1938        | 8.4      | −19.4    | 1.40           | 7.1 | 3.4 | 8.25·10−10 | 2.47·10−9        | 1.33·10−8       | 2.65·10−9
HESS J1808-204        | 272.2    | −20.4    | 0.62           | 2.1 | 4.5 | 5.59·10−10 | 1.68·10−9        | 9.60·10−9       | 1.86·10−9
HESS J1804-216        | 271.1    | −21.7    | 0.51           | 2.1 | 4.3 | 5.30·10−10 | 1.66·10−9        | 9.75·10−9       | 1.89·10−9
W 28                  | 270.4    | −23.3    | 0.83           | 3.0 | 4.3 | 7.18·10−10 | 2.36·10−9        | 1.59·10−8       | 3.02·10−9
PKS 0454-234          | 74.3     | −23.5    | 0.72           | 1.2 | 4.9 | 6.79·10−10 | 2.23·10−9        | 1.47·10−8       | 2.79·10−9
1ES 1101-232          | 165.8    | −23.5    | 0.32           | 0.1 | 5.3 | 4.28·10−10 | 1.33·10−9        | 8.70·10−9       | 1.63·10−9
Table 11.3. Results for astrophysical sources on the a priori search list. Table continued.

Source                | R.A. [°] | dec. [°] | −log10(p-val.) | n̂S  | γ̂   | Eν−2       | Eν−2 e−Eν/10 TeV | Eν−2 e−Eν/1 TeV | Eν−3
HESS J1800-240A       | 270.4    | −24.0    | 0.55           | 1.6 | 4.3 | 5.94·10−10 | 1.96·10−9        | 1.29·10−8       | 2.46·10−9
HESS J1800-240B       | 270.2    | −24.1    | 0.54           | 1.5 | 4.3 | 5.91·10−10 | 1.94·10−9        | 1.34·10−8       | 2.45·10−9
PKS 0301-243          | 45.8     | −24.1    | 0.33           | 0.1 | 4.0 | 4.48·10−10 | 1.40·10−9        | 9.45·10−9       | 1.79·10−9
AP Lib                | 229.4    | −24.4    | -              | 0.0 | -   | 4.32·10−10 | 1.42·10−9        | 9.49·10−9       | 1.78·10−9
Terzan 5              | 266.9    | −24.8    | 0.40           | 0.4 | 4.2 | 4.59·10−10 | 1.63·10−9        | 1.11·10−8       | 2.03·10−9
NGC 253               | 11.9     | −25.3    | 1.00           | 5.3 | 3.3 | 7.10·10−10 | 2.83·10−9        | 2.05·10−8       | 3.88·10−9
SNR G000.9+00.1       | 266.8    | −28.2    | -              | 0.0 | -   | 4.39·10−10 | 1.61·10−9        | 1.38·10−8       | 2.47·10−9
Galactic Centre       | 266.4    | −29.0    | 0.41           | 0.1 | 2.3 | 4.34·10−10 | 1.64·10−9        | 1.48·10−8       | 2.66·10−9
PKS 1622-297          | 246.5    | −29.8    | -              | 0.0 | -   | 3.92·10−10 | 1.51·10−9        | 1.43·10−8       | 2.55·10−9
PKS 2155-304          | 329.8    | −30.3    | -              | 0.0 | -   | 3.90·10−10 | 1.51·10−9        | 1.46·10−8       | 2.55·10−9
HESS J1741-302        | 265.3    | −30.3    | 0.96           | 2.4 | 2.2 | 5.82·10−10 | 2.54·10−9        | 2.56·10−8       | 4.44·10−9
HESS J1745-303        | 266.2    | −30.3    | 0.63           | 1.4 | 2.3 | 4.84·10−10 | 1.98·10−9        | 1.96·10−8       | 3.47·10−9
H 2356-309            | 359.8    | −30.6    | -              | 0.0 | -   | 3.75·10−10 | 1.46·10−9        | 1.43·10−8       | 2.58·10−9
1RXS J101015.9-311909 | 152.6    | −31.3    | -              | 0.0 | -   | 3.67·10−10 | 1.47·10−9        | 1.42·10−8       | 2.55·10−9
PKS 0548-322          | 87.6     | −32.3    | -              | 0.0 | -   | 3.80·10−10 | 1.43·10−9        | 1.43·10−8       | 2.57·10−9
HESS J1729-345        | 262.4    | −34.5    | 0.36           | 0.1 | 3.8 | 3.36·10−10 | 1.26·10−9        | 1.40·10−8       | 2.37·10−9
HESS J1731-347        | 263.1    | −34.8    | -              | 0.0 | -   | 3.36·10−10 | 1.24·10−9        | 1.33·10−8       | 2.32·10−9
Table 11.4. Results for astrophysical sources on the a priori search list. Table continued.

Source          | R.A. [°] | dec. [°] | −log10(p-val.) | n̂S  | γ̂   | Eν−2       | Eν−2 e−Eν/10 TeV | Eν−2 e−Eν/1 TeV | Eν−3
PKS 1454-354    | 224.4    | −35.7    | -              | 0.0 | -   | 3.13·10−10 | 1.17·10−9        | 1.27·10−8       | 2.25·10−9
SNR G349.7+00.2 | 259.5    | −37.5    | 0.74           | 4.2 | 3.4 | 5.07·10−10 | 1.92·10−9        | 2.25·10−8       | 3.89·10−9
PKS 0426-380    | 67.2     | −37.9    | 0.54           | 2.0 | 2.5 | 4.40·10−10 | 1.60·10−9        | 1.89·10−8       | 3.26·10−9
CTB 37B         | 258.5    | −38.1    | 0.72           | 4.0 | 3.2 | 5.14·10−10 | 1.90·10−9        | 2.20·10−8       | 3.83·10−9
HESS J1718-385  | 259.5    | −38.5    | 1.58           | 5.9 | 3.4 | 7.62·10−10 | 2.80·10−9        | 3.21·10−8       | 5.58·10−9
CTB 37A         | 258.6    | −38.6    | 0.94           | 4.9 | 3.3 | 5.84·10−10 | 2.22·10−9        | 2.44·10−8       | 4.18·10−9
RX J1713.7-3946 | 258.4    | −39.7    | 0.67           | 3.0 | 2.4 | 4.85·10−10 | 1.79·10−9        | 2.01·10−8       | 3.59·10−9
HESS J1708-410  | 257.1    | −41.0    | 0.69           | 2.0 | 2.1 | 5.12·10−10 | 1.86·10−9        | 2.10·10−8       | 3.68·10−9
SN 1006-SW      | 225.5    | −41.1    | -              | 0.0 | -   | 3.70·10−10 | 1.25·10−9        | 1.36·10−8       | 2.41·10−9
SN 1006-NE      | 226.0    | −41.8    | 0.51           | 1.4 | 3.4 | 4.40·10−10 | 1.56·10−9        | 1.74·10−8       | 3.04·10−9
HESS J1702-420  | 255.6    | −42.0    | 0.63           | 1.2 | 2.1 | 4.97·10−10 | 1.78·10−9        | 2.02·10−8       | 3.54·10−9
1ES 1312-423    | 198.7    | −42.6    | 0.50           | 0.9 | 3.3 | 4.56·10−10 | 1.60·10−9        | 1.72·10−8       | 3.02·10−9
Centaurus A     | 201.3    | −43.0    | 0.83           | 1.2 | 4.3 | 5.80·10−10 | 2.07·10−9        | 2.34·10−8       | 4.09·10−9
PKS 0447-439    | 72.3     | −43.8    | -              | 0.0 | -   | 3.96·10−10 | 1.30·10−9        | 1.39·10−8       | 2.51·10−9
PKS 0537-441    | 84.8     | −44.1    | 0.77           | 3.1 | 3.3 | 5.70·10−10 | 2.04·10−9        | 2.18·10−8       | 3.96·10−9
HESS J1708-443  | 257.0    | −44.3    | 0.62           | 0.9 | 1.8 | 5.16·10−10 | 1.76·10−9        | 1.94·10−8       | 3.51·10−9
Vela Pulsar     | 128.9    | −45.2    | -              | 0.0 | -   | 4.11·10−10 | 1.31·10−9        | 1.44·10−8       | 2.62·10−9
Table 11.5. Results for astrophysical sources on the a priori search list. Table continued.

Source          | R.A. [°] | dec. [°] | −log10(p-val.) | n̂S  | γ̂   | Eν−2       | Eν−2 e−Eν/10 TeV | Eν−2 e−Eν/1 TeV | Eν−3
Vela X          | 128.8    | −45.6    | -              | 0.0 | -   | 4.11·10−10 | 1.34·10−9        | 1.49·10−8       | 2.64·10−9
Westerlund 1    | 251.8    | −45.9    | 0.43           | 0.2 | 1.9 | 4.18·10−10 | 1.43·10−9        | 1.55·10−8       | 2.77·10−9
HESS J1641-463  | 250.2    | −46.3    | -              | 0.0 | -   | 3.98·10−10 | 1.33·10−9        | 1.48·10−8       | 2.60·10−9
RX J0852.0-4622 | 133.0    | −46.4    | 0.48           | 0.5 | 3.9 | 4.49·10−10 | 1.54·10−9        | 1.73·10−8       | 3.09·10−9
HESS J1640-465  | 250.2    | −46.5    | -              | 0.0 | -   | 4.00·10−10 | 1.35·10−9        | 1.46·10−8       | 2.62·10−9
HESS J1634-472  | 248.8    | −47.4    | -              | 0.0 | -   | 4.04·10−10 | 1.38·10−9        | 1.52·10−8       | 2.72·10−9
HESS J1632-478  | 248.0    | −47.7    | -              | 0.0 | -   | 4.08·10−10 | 1.40·10−9        | 1.49·10−8       | 2.73·10−9
GX 339-4        | 255.6    | −48.7    | -              | 0.0 | -   | 4.23·10−10 | 1.47·10−9        | 1.56·10−8       | 2.76·10−9
PKS 2005-489    | 302.3    | −48.8    | -              | 0.0 | -   | 4.17·10−10 | 1.43·10−9        | 1.57·10−8       | 2.78·10−9
HESS J1626-490  | 246.5    | −49.1    | 0.95           | 1.8 | 3.0 | 6.37·10−10 | 2.36·10−9        | 2.58·10−8       | 4.74·10−9
HESS J1616-508  | 244.1    | −51.0    | 0.55           | 0.9 | 2.7 | 4.77·10−10 | 1.75·10−9        | 1.92·10−8       | 3.55·10−9
HESS J1614-518  | 243.5    | −51.8    | 0.62           | 1.1 | 2.7 | 5.03·10−10 | 1.87·10−9        | 2.22·10−8       | 3.77·10−9
SNR G327.1-01.1 | 238.6    | −55.0    | -              | 0.0 | -   | 4.18·10−10 | 1.48·10−9        | 1.67·10−8       | 2.96·10−9
Cir X-1         | 230.2    | −57.2    | -              | 0.0 | -   | 4.55·10−10 | 1.62·10−9        | 1.99·10−8       | 3.32·10−9
Westerlund 2    | 155.8    | −57.8    | 0.96           | 1.6 | 2.9 | 6.46·10−10 | 2.51·10−9        | 3.19·10−8       | 5.16·10−9
HESS J1503-582  | 225.9    | −58.2    | -              | 0.0 | -   | 4.67·10−10 | 1.67·10−9        | 2.12·10−8       | 3.45·10−9
HESS J1026-582  | 156.6    | −58.2    | 1.04           | 2.0 | 3.0 | 6.67·10−10 | 2.61·10−9        | 3.37·10−8       | 5.39·10−9
Table 11.6. Results for astrophysical sources on the a priori search list. Table continued.

Source              | R.A. [°] | dec. [°] | −log10(p-val.) | n̂S  | γ̂   | Eν−2       | Eν−2 e−Eν/10 TeV | Eν−2 e−Eν/1 TeV | Eν−3
HESS J1018-589      | 154.3    | −59.0    | 0.73           | 1.0 | 2.7 | 5.77·10−10 | 2.18·10−9        | 2.89·10−8       | 4.61·10−9
MSH 15-52           | 228.5    | −59.2    | -              | 0.0 | -   | 4.72·10−10 | 1.74·10−9        | 2.12·10−8       | 3.51·10−9
SNR G318.2+00.1     | 224.5    | −59.4    | -              | 0.0 | -   | 4.69·10−10 | 1.72·10−9        | 2.13·10−8       | 3.63·10−9
ESO 139-G12         | 264.3    | −59.9    | -              | 0.0 | -   | 4.62·10−10 | 1.66·10−9        | 2.13·10−8       | 3.49·10−9
Kookaburra (PWN)    | 215.1    | −60.7    | -              | 0.0 | -   | 4.72·10−10 | 1.70·10−9        | 2.26·10−8       | 3.61·10−9
HESS J1458-608      | 224.6    | −60.8    | -              | 0.0 | -   | 4.73·10−10 | 1.71·10−9        | 2.27·10−8       | 3.63·10−9
HESS J1427-608      | 217.0    | −60.9    | -              | 0.0 | -   | 4.75·10−10 | 1.70·10−9        | 2.28·10−8       | 3.64·10−9
Kookaburra (Rabbit) | 214.5    | −61.0    | -              | 0.0 | -   | 4.73·10−10 | 1.70·10−9        | 2.28·10−8       | 3.64·10−9
SNR G292.2-00.5     | 169.7    | −61.4    | -              | 0.0 | -   | 4.91·10−10 | 1.71·10−9        | 2.25·10−8       | 3.59·10−9
HESS J1507-622      | 226.7    | −62.3    | -              | 0.0 | -   | 4.60·10−10 | 1.70·10−9        | 2.26·10−8       | 3.51·10−9
RCW 86              | 220.6    | −62.5    | -              | 0.0 | -   | 4.63·10−10 | 1.70·10−9        | 2.30·10−8       | 3.55·10−9
HESS J1303-631      | 195.6    | −63.2    | 0.33           | 0.1 | 3.4 | 4.75·10−10 | 1.76·10−9        | 2.39·10−8       | 3.71·10−9
PSR B1259-63        | 195.7    | −63.8    | 0.51           | 1.2 | 3.2 | 6.35·10−10 | 2.48·10−9        | 3.41·10−8       | 5.23·10−9
HESS J1356-645      | 209.1    | −64.5    | -              | 0.0 | -   | 4.81·10−10 | 1.74·10−9        | 2.38·10−8       | 3.70·10−9
30 Dor-C            | 83.9     | −69.1    | 1.04           | 2.3 | 3.6 | 9.14·10−10 | 3.65·10−9        | 5.35·10−8       | 7.92·10−9
LHA 120             | 84.3     | −69.1    | 1.01           | 2.2 | 3.6 | 9.19·10−10 | 3.56·10−9        | 5.33·10−8       | 7.78·10−9
LMC N132D           | 81.3     | −69.6    | 1.03           | 2.1 | 3.7 | 9.41·10−10 | 3.64·10−9        | 5.38·10−8       | 7.85·10−9
References
[1] G. Cocconi and P. Morrison. Searching for interstellar communications.
Nature, 184(4690):844–846, 1959.
[2] K.A. Olive et al. (Particle Data Group). Review of particle physics. Chin.
Phys. C, 010009, 2014.
[3] M. G. Aartsen et al. Evidence for High-Energy Extraterrestrial Neutrinos at
the IceCube Detector. Science, 342:1242856, 2013.
[4] M. E. Peskin and D. V. Schroeder. An Introduction to Quantum Field Theory.
Westview Press, 1995.
[5] G. Aad et al. Observation of a new particle in the search for the Standard
Model Higgs boson with the ATLAS detector at the LHC. Phys. Lett.,
B716:1–29, 2012.
[6] S. Chatrchyan et al. Observation of a new boson at a mass of 125 GeV with
the CMS experiment at the LHC. Phys. Lett., B716:30–61, 2012.
[7] G.T. Bodwin et al. Quarkonium at the Frontiers of High Energy Physics: A
Snowmass White Paper. In Community Summer Study 2013: Snowmass on the
Mississippi (CSS2013) Minneapolis, MN, USA, July 29-August 6, 2013, 2013.
[8] W. Ochs. The Status of Glueballs. J. Phys., G40:043001, 2013.
[9] S.P. Martin. A Supersymmetry primer. Adv. Ser. Direct. High Energy Phys.,
18(1), 1997.
[10] E. Noether. Invariante Variationsprobleme. Nachr. d. Königl. Gesellsch. d.
Wiss. zu Göttingen, Math-phys. Klasse, 235(57), 1918.
[11] R.K. Kutschke. The Mu2e Experiment at Fermilab. Proceedings for a poster
presentation at the XXXI Conference on Physics in Collision, Vancouver,
Canada, 2011.
[12] M. Thomson. Modern Particle Physics. Cambridge University Press, 2013.
[13] F.J. Hasert et al. Observation of neutrino-like interactions without muon or
electron in the Gargamelle neutrino experiment. Nuclear Physics B, 73(1):1 –
22, 1974.
[14] M. Banner et al. Observation of single isolated electrons of high transverse
momentum in events with missing transverse energy at the CERN p̄p collider.
Physics Letters B, 122(5–6):476 – 485, 1983.
[15] G. Arnison et al. Experimental observation of isolated large transverse energy
electrons with associated missing energy at √s = 540 GeV. Physics Letters B,
122(1):103 – 116, 1983.
235
[16] P. Bagnaia et al. Evidence for Z0 → e+e− at the CERN p̄p collider.
Phys. Lett., B129:130–140, 1983.
[17] J. Goldstone, A. Salam, and S. Weinberg. Broken Symmetries. Phys.Rev.,
127:965–970, 1962.
[18] "Wikimedia user Nicholas Moreau". Wikisource logo, 2006. Accessed August
28, 2015. Used under Creative Commons Attributions 3.0 Unported licence.
Horizontal wiggly black line and text fields including arrows have been added.
[19] F. Zwicky. On the Masses of Nebulae and of Clusters of Nebulae. Astrophys.
J., 86:217, 1937.
[20] V. Rubin, N. Thonnard, and W.K. Ford Jr. Rotational properties of 21 SC
galaxies with a large range of luminosities and radii, from NGC 4605 (R =
4 kpc) to UGC 2885 (R = 122 kpc). Astrophys. J., 238:471–487, 1980.
[21] E. Corbelli and P. Salucci. The extended rotation curve and the dark matter
halo of M33. Monthly Notices of the Royal Astronomical Society,
311(2):441–447, 2000.
[22] F. Iocco, M. Pato, and G. Bertone. Evidence for dark matter in the inner Milky
Way. Nature Physics, 11(3):245–248, 2015.
[23] D. Clowe et al. A Direct Empirical Proof of the Existence of Dark Matter.
Astrophysical Journal Letters, 648(2):L109–L113, 2006.
[24] P.A.R. Ade et al. Planck 2015 results. XIII. Cosmological parameters.
arXiv.org, 2015. arXiv:1502.01589 [astro-ph.CO].
[25] L. Bergström and A. Goobar. Cosmology and particle astrophysics. Springer
Science & Business Media, 2006.
[26] F. Close. Neutrino. Oxford University Press, 2010.
[27] K. Zuber. Neutrino Physics, Second Edition. Series in High Energy Physics,
Cosmology and Gravitation. Taylor & Francis, 2011.
[28] T.K. Gaisser. Cosmic Rays and Particle Physics. Cambridge University Press,
1990.
[29] M. Spurio. Particles and Astrophysics: A Multi-Messenger Approach.
Springer International Publishing, 2014.
[30] M.S. Longair. High Energy Astrophysics. Cambridge University Press, 2011.
[31] F. Close. Antimatter. OUP Oxford, 2009.
[32] S. H. Neddermeyer and C. D. Anderson. Note on the Nature of Cosmic Ray
Particles. Phys. Rev., 51:884–886, 1937.
[33] B. Rossi. On the Magnetic Deflection of Cosmic Rays. Phys. Rev.,
36:606–606, Aug 1930.
[34] T.H. Johnson. The Azimuthal Asymmetry of the Cosmic Radiation. Phys.
Rev., 43:834–835, May 1933.
236
[35] L. Alvarez and A. H. Compton. A Positively Charged Component of Cosmic
Rays. Phys. Rev., 43:835–836, May 1933.
[36] M.G. Aartsen et al. Anisotropy in Cosmic-Ray Arrival Directions Using
IceCube and IceTop. In PoS Proceedings of the 34th International Cosmic Ray
Conference, contribution 274, 2015.
[37] E. Battaner, J. Castellano, and M. Masip. Galactic Magnetic Fields and the
Large-Scale Anisotropy at Milagro. The Astrophysical Journal Letters,
703(1):L90, 2009.
[38] V.S. Berezinskii, S.V. Bulanov, V.A. Dogiel, and V.S. Ptuskin. Astrophysics of
cosmic rays. North-Holland, 1990.
[39] A. H. Compton and I. A. Getting. An apparent effect of galactic rotation on the
intensity of cosmic rays. Phys. Rev., 47:817–821, Jun 1935.
[40] R. Abbasi et al. Observation of an Anisotropy in the Galactic Cosmic Ray
arrival direction at 400 TeV with IceCube. Astrophys. J., 746:33, 2012.
[41] A. A. Abdo et al. The Large Scale Cosmic-Ray Anisotropy as Observed with
Milagro. Astrophys. J., 698:2121–2130, 2009.
[42] A. Letessier-Selvon. Ultrahigh Energy Cosmic Rays and the Auger
Observatory. AIP Conf. Proc., 815:73–84, 2006. [,73(2005)].
[43] L. Bergström, J. Edsjö, and G. Zaharijas. Dark Matter Interpretation of Recent
Electron and Positron Data. Phys. Rev. Lett., 103:031103, Jul 2009.
[44] I. V. Moskalenko and A. W. Strong. Production and propagation of cosmic-ray
positrons and electrons. Astrophys. J., 493(2):694, 1998.
[45] P.D. Serpico. Astrophysical models for the origin of the positron ’excess’.
Astropart. Phys., 39-40:2–11, 2012.
[46] C. Spiering. Towards High-Energy Neutrino Astronomy. Eur. Phys. J. H,
37(3):515–565, 2012.
[47] G. T. Zatsepin and V. A. Kuzmin. Upper limit of the spectrum of cosmic rays.
JETP Lett., 4:78–80, 1966. [Pisma Zh. Eksp. Teor. Fiz.4,114(1966)].
[48] K. Greisen. End to the cosmic ray spectrum? Phys. Rev. Lett., 16:748–750,
1966.
[49] R. Abbasi et al. First observation of the Greisen-Zatsepin-Kuzmin
suppression. Phys. Rev. Lett., 100:101101, 2008.
[50] J. Abraham et al. Observation of the suppression of the flux of cosmic rays
above 4 × 1019 eV. Phys. Rev. Lett., 101:061101, 2008.
[51] J. L. Puget, F. W. Stecker, and J. H. Bredekamp. Photonuclear Interactions of
Ultrahigh-Energy Cosmic Rays and their Astrophysical Consequences.
Astrophys. J., 205:638–654, 1976.
[52] J. D. Bray et al. Limit on the ultrahigh-energy neutrino flux from lunar
observations with the Parkes radio telescope. Phys. Rev.,
237
D91(6):063002, 2015.
[53] N.G. Lehtinen et al. FORTE satellite constraints on ultra-high energy cosmic
particle fluxes. Phys. Rev., D69:013008, 2004.
[54] P. Allison et al. Design and Initial Performance of the Askaryan Radio Array
Prototype EeV Neutrino Detector at the South Pole. Astropart. Phys.,
35:457–477, 2012.
[55] P. Allison et al. Performance of two Askaryan Radio Array stations and first
results in the search for ultra-high energy neutrinos. arXiv.org, 2015.
arXiv:1507.08991 [astro-ph.HE].
[56] S.W. Barwick et al. Design and Performance of the ARIANNA Hexagonal
Radio Array Systems. arXiv.org, 2014. arXiv:1410.7369 [astro-ph.IM].
[57] G. A. Askaryan. Excess negative charge of an electron-photon shower and its
coherent radio emission. Sov. Phys. JETP, 14(2):441–443, 1962.
[58] T. Karg. Acoustic Neutrino Detection in Ice: Past, Present, and Future. 5th
International Workshop on Acoustic and Radio EeV Neutrino Detection
Activities (ARENA 2012) Erlangen, Germany, June 29-July 2, 2012. [AIP
Conf. Proc.1535,162(2013)].
[59] A. Mucke et al. On photohadronic processes in astrophysical environments.
Publ. Astron. Soc. Austral., 16:160, 1999.
[60] K.K. Andersen and S.R. Klein. High energy cosmic-ray interactions with
particles from the Sun. Phys. Rev. D, 83:103519, May 2011.
[61] M. Ahlers and K. Murase. Probing the Galactic Origin of the IceCube Excess
with Gamma-Rays. Phys. Rev., D90(2):023010, 2014.
[62] J.G. Learned and K. Mannheim. High-energy neutrino astrophysics. Ann. Rev.
Nucl. Part. Sci., 50:679–749, 2000.
[63] F. Halzen. Cosmic Neutrinos from the Sources of Galactic and Extragalactic
Cosmic Rays. Astrophys. Space Sci., 309:407–414, 2007.
[64] E. Waxman and J.N. Bahcall. High-energy neutrinos from astrophysical
sources: An Upper bound. Phys.Rev., D59:023002, 1999.
[65] J.N. Bahcall and E. Waxman. High-energy astrophysical neutrinos: The Upper
bound is robust. Phys. Rev., D64:023002, 2001.
[66] T. K. Gaisser. Neutrino astronomy: Physics goals, detector parameters. In Talk
given at the OECD Megascience Forum Workshop, Taormina, Sicily, 1997.
[67] S.R. Kelner and F.A. Aharonian. Energy spectra of gamma-rays, electrons and
neutrinos produced at interactions of relativistic protons with low energy
radiation. Phys. Rev., D78:034013, 2008. [Erratum: Phys.
Rev.D82,099901(2010)].
[68] S.R. Kelner, F.A., Aharonian, and V.V. Bugayov. Energy spectra of
gamma-rays, electrons and neutrinos produced at proton-proton interactions in
238
the very high energy regime. Phys. Rev., D74:034018, 2006. [Erratum: Phys.
Rev.D79,039901(2009)].
[69] J.G. Learned and S. Pakvasa. Detecting tau-neutrino oscillations at PeV
energies. Astropart. Phys., 3:267–274, 1995.
[70] S. Hummer, M. Maltoni, W. Winter, and C. Yaguna. Energy dependent
neutrino flavor ratios from cosmic accelerators on the Hillas plot. Astropart.
Phys., 34:205–224, 2010.
[71] M. Ackermann et al. Detection of the Characteristic Pion-Decay Signature in
Supernova Remnants. Science, 339:807, 2013.
[72] T. Kamae et al. Parameterization of Gamma, e+/- and Neutrino Spectra
Produced by p-p Interaction in Astronomical Environment. Astrophys. J.,
647:692–708, 2006. [Erratum: Astrophys. J.662,779(2007)].
[73] L. Maraschi, G. Ghisellini, and A. Gelotti. A jet model for the gamma-ray
emitting blazar 3C 279. Astrophysical Journal Letters, 1992.
[74] T. Stanev. High Energy Cosmic Rays. Springer Praxis Books. Springer Berlin
Heidelberg, 2010.
[75] J.W. Hewitt, F. Yusef-Zadeh, and M. Wardle. Correlation of Supernova
Remnant Masers and Gamma-Ray Sources. The Astrophysical Journal Letters,
706(2):L270, 2009.
[76] P.L. Nolan et al. Fermi Large Area Telescope Second Source Catalog.
Astrophys. J. Suppl., 199:31, 2012.
[77] M. G. Aartsen et al. A combined maximum-likelihood analysis of the
high-energy astrophysical neutrino flux measured with IceCube. Astrophys. J.,
809(1):98, 2015.
[78] M. G. Aartsen et al. Observation of High-Energy Astrophysical Neutrinos in
Three Years of IceCube Data. Phys. Rev. Lett., 113:101101, 2014.
[79] M. G. Aartsen et al. Evidence for Astrophysical Muon Neutrinos from the
Northern Sky with IceCube. Phys. Rev. Lett., 115(8):081102, 2015.
[80] M. Ahlers and F. Halzen. Pinpointing Extragalactic Neutrino Sources in Light
of Recent IceCube Observations. Phys. Rev., D90(4):043005, 2014.
[81] C. G. S. Costa and C. Salles. Prompt atmospheric neutrinos: Phenomenology
and implications. In Phenomenology 2001 Symposium (PHENO 2001)
Madison, Wisconsin, May 7-9, 2001, 2001.
[82] A. Bhattacharya et al. Perturbative charm production and the prompt
atmospheric neutrino flux in light of RHIC and LHC. JHEP, 06:110, 2015.
[83] S.P. Reynolds. Supernova remnants at high energy. Ann. Rev. Astron.
Astrophys., 2008.
[84] T.K. Gaisser, F. Halzen, and T. Stanev. Particle astrophysics with high-energy
neutrinos. Phys. Rept., 258:173–236, 1995. [Erratum: Phys.
239
Rept.271,355(1996)].
[85] P.M. Bauleo and J.R. Martino. The dawn of the particle astronomy era in
ultra-high-energy cosmic rays. Nature, 458N7240:847–851, 2009.
[86] Z. Fodor, S.D. Katz, and A. Ringwald. Determination of absolute neutrino
masses from Z bursts. Phys.Rev.Lett., 88:171101, 2002.
[87] P. W. Gorham et al. The Antarctic Impulsive Transient Antenna Ultra-high
Energy Neutrino Detector Design, Performance, and Sensitivity for 2006-2007
Balloon Flight. Astropart. Phys., 32:10–41, 2009.
[88] P. W. Gorham et al. New Limits on the Ultra-high Energy Cosmic Neutrino
Flux from the ANITA Experiment. Phys. Rev. Lett., 103:051103, 2009.
[89] J.K. Becker. High-energy neutrinos in the context of multimessenger physics.
Phys. Rept., 458:173–246, 2008.
[90] D. Prialnik. An Introduction to the Theory of Stellar Structure and Evolution.
Cambridge University Press, 2009.
[91] E. Waxman and A. Loeb. TeV Neutrinos and GeV Photons from Shock
Breakout in Supernovae. Phys. Rev. Lett., 87:071101, Jul 2001.
[92] C. Grupen. Astroparticle Physics. Springer, 2005.
[93] P.A. Caraveo. Gamma-ray Pulsar Revolution. Ann. Rev. Astron. Astrophys.,
52:211–250, 2014.
[94] F.A. Aharonian, L.A. Anchordoqui, D. Khangulyan, and T. Montaruli.
Microquasar LS 5039: A TeV gamma-ray emitter and a potential TeV neutrino
source. J. Phys. Conf. Ser., 39:408–415, 2006.
[95] W. Bednarek. TeV neutrinos from microquasars in compact massive binaries.
Astrophys. J., 631:466, 2005.
[96] F. Aharonian et al. Discovery of very-high-energy γ-rays from the Galactic
Centre ridge. Nature, 439(7077):695–698, 02 2006.
[97] H.J. Völk, F.A. Aharonian, and D. Breitschwerdt. The nonthermal energy
content and gamma ray emission of starburst galaxies and clusters of galaxies.
Space Science Reviews, 75(1-2):279–297, 1996.
[98] V. Beckmann and C. Shrader. Active Galactic Nuclei. Wiley, 2012.
[99] E. Waxman. Gamma-ray bursts: Potential sources of ultra high energy
cosmic-rays. Nucl. Phys. Proc. Suppl., 151:46–53, 2006.
[100] R. Abbasi et al. An absence of neutrinos associated with cosmic-ray
acceleration in γ-ray bursts. Nature, 484:351–353, 2012.
[101] J.A. Formaggio and G.P. Zeller. From eV to EeV: Neutrino Cross Sections
Across Energy Scales. Rev.Mod.Phys., 84:1307, 2012.
[102] A. Gazizov and M.P. Kowalski. ANIS: High energy neutrino generator for
neutrino telescopes. Comput.Phys.Commun., 172:203–213, 2005.
240
[103] Raj Gandhi, Chris Quigg, Mary Hall Reno, and Ina Sarcevic. Neutrino
interactions at ultrahigh-energies. Phys. Rev., D58:093009, 1998.
[104] H.L. Lai et al. Global QCD analysis of parton structure of the nucleon:
CTEQ5 parton distributions. Eur.Phys.J., C12:375–392, 2000.
[105] John D. Jackson. Classical Electrodynamics. Wiley, 3 edition, 1998.
[106] J. van Santen. Neutrino Interactions in IceCube above 1 TeV: Constraints
on Atmospheric Charmed-Meson Production and Investigation of the
Astrophysical Neutrino Flux with 2 Years of IceCube Data taken 2010–2012.
PhD thesis, U. Wisconsin, Madison, 2014.
[107] A. Achterberg et al. First Year Performance of The IceCube Neutrino
Telescope. Astropart. Phys., 26:155–173, 2006.
[108] P.B. Price and K. Woschnagg. Role of group and phase velocity in high-energy
neutrino observatories. Astropart. Phys., 15:97–100, 2001.
[109] D.E. Groom, N.V. Mokhov, and S.I. Striganov. Muon Stopping Power and
Range Tables 10 MeV–100 TeV. Atomic Data and Nuclear Data Tables,
78(2):183–356, 2001.
[110] D. Chirkin and W. Rhode. Propagating leptons through matter with Muon
Monte Carlo (MMC). arXiv:hep-ph/0407075, 2004.
[111] J.H. Koehne et al. PROPOSAL: A tool for propagation of charged leptons.
Comput. Phys. Commun., 184(9):2070–2090, 2013.
[112] S.R. Klein and A. Connolly. Neutrino Absorption in the Earth, Neutrino
Cross-Sections, and New Physics. In Community Summer Study 2013:
Snowmass on the Mississippi (CSS2013), Minneapolis, MN, USA,
July 29–August 6, 2013.
[113] M. G. Aartsen et al. Search for Time-independent Neutrino Emission from
Astrophysical Sources with 3 yr of IceCube Data. Astrophys. J., 779(2):132,
2013.
[114] Institute for Nuclear Research of the Russian Academy of Sciences. The first
cluster of Baikal-GVD, a deep underwater neutrino telescope on the cubic
kilometer scale, has started operation in Lake Baikal. Press release, 2015.
[115] M. G. Aartsen et al. First observation of PeV-energy neutrinos with IceCube.
Phys. Rev. Lett., 111:021103, 2013.
[116] R. Abbasi et al. The Design and Performance of IceCube DeepCore.
Astropart. Phys., 35:615–624, 2012.
[117] F. Halzen and S.R. Klein. IceCube: An Instrument for Neutrino Astronomy.
Rev. Sci. Instrum., 81:081101, 2010.
[118] R. Abbasi et al. The IceCube Data Acquisition System: Signal Capture,
Digitization, and Timestamping. Nucl. Instrum. Meth., A601:294–316, 2009.
[119] R. Abbasi et al. Calibration and Characterization of the IceCube
Photomultiplier Tube. Nucl. Instrum. Meth., A618:139–152, 2010.
[120] M.G. Aartsen et al. Measurement of South Pole ice transparency with the
IceCube LED calibration system. Nucl. Instrum. Meth., A711:73–89, 2013.
[121] M. Ackermann et al. Optical properties of deep glacial ice at the South Pole.
J. Geophys. Res., 111(D13):D13203, 2006.
[122] M.G. Aartsen et al. South Pole glacial climate reconstruction from
multi-borehole laser particulate stratigraphy. Journal of Glaciology,
59(218):1117–1129, 2013.
[123] D. Heck, J. N. Capdevielle, G. Schatz, and T. Thouw. CORSIKA: A Monte
Carlo Code to Simulate Extensive Air Showers. Report FZKA 6019,
Forschungszentrum Karlsruhe, 1998.
[124] R.S. Fletcher, T.K. Gaisser, P. Lipari, and T. Stanev. SIBYLL: An
event generator for simulation of high-energy cosmic ray cascades. Phys.
Rev., D50:5710–5731, 1994.
[125] F. Schmidt. CORSIKA shower images.
www.ast.leeds.ac.uk/~fs/showerimages.html, 2015.
[126] J.R. Hoerandel. On the knee in the energy spectrum of cosmic rays.
Astropart. Phys., 19:193–220, 2003.
[127] A. M. Dziewonski and D. L. Anderson. Preliminary reference earth model.
Phys. Earth Planet. Interiors, 25:297–356, 1981.
[128] C. Andreopoulos et al. The GENIE Neutrino Monte Carlo Generator. Nucl.
Instrum. Meth., A614:87–104, 2010.
[129] M. Honda et al. Calculation of atmospheric neutrino flux using the interaction
model calibrated with atmospheric muon data. Phys. Rev., D75:043006, 2007.
[130] W. Lohmann, R. Kopp, and R. Voss. Energy Loss of Muons in the Energy
Range 1-GeV to 10000-GeV. CERN-85-03, CERN-YELLOW-85-03, 1985.
[131] D. Chirkin. Photon tracking with GPUs in IceCube. Nucl. Instrum. Meth.,
A725:141–143, 2013.
[132] J. Lundberg et al. Light tracking for glaciers and oceans: Scattering and
absorption in heterogeneous media with Photonics. Nucl. Instrum. Meth.,
A581:619–631, 2007.
[133] G.W. Clark. Arrival Directions of Cosmic-Ray Air Showers from the Northern
Sky. Phys. Rev., 108:450–457, 1957.
[134] M. G. Aartsen et al. Observation of the cosmic-ray shadow of the Moon with
IceCube. Phys. Rev., D89(10):102004, 2014.
[135] M. G. Aartsen et al. Improvement in Fast Particle Track Reconstruction with
Robust Statistics. Nucl. Instrum. Meth., A736:143–149, 2014.
[136] J. Ahrens et al. Muon track reconstruction and data selection techniques in
AMANDA. Nucl. Instrum. Meth., A524:169–194, 2004.
[137] F. James and M. Roos. Minuit: A System for Function Minimization and
Analysis of the Parameter Errors and Correlations. Comput. Phys. Commun.,
10:343–367, 1975.
[138] D. Pandel. Bestimmung von Wasser- und Detektorparametern und
Rekonstruktion von Myonen bis 100 TeV mit dem Baikal-Neutrinoteleskop
NT-72. Master’s thesis, Humboldt-Universität zu Berlin, 1996.
[139] N. van Eijndhoven, O. Fadiran, and G. Japaridze. Implementation of a Gauss
convoluted Pandel PDF for track reconstruction in neutrino telescopes.
Astropart. Phys., 28(4–5):456–462, 2007.
[140] T. Neunhöffer. Estimating the angular resolution of tracks in neutrino
telescopes based on a likelihood analysis. Astropart. Phys., 25:220, 2006.
[141] J-P. Hülß. Search for neutrinos from the direction of the Galactic Center with
the IceCube neutrino telescope. PhD thesis, Wuppertal University, Germany,
2010.
[142] S. Euler. Observation of oscillations of atmospheric neutrinos with the
IceCube Neutrino Observatory. PhD thesis, RWTH Aachen University, 2014.
[143] M.G. Aartsen et al. Energy Reconstruction Methods in the IceCube Neutrino
Telescope. JINST, 9:P03009, 2014.
[144] M.G. Aartsen et al. Medium-energy (few TeV - 100 TeV) neutrino point
source searches in the Southern sky with IceCube. In PoS Proceedings of the
34th International Cosmic Ray Conference, contribution 1056, 2015.
[145] J. Braun et al. Methods for point source analysis in high energy neutrino
telescopes. Astropart. Phys., 29:299, 2008.
[146] G. Punzi. Comments on likelihood fits with variable resolution. eConf
C030908:WELT002, 2003.
[147] S.S. Wilks. The Large-Sample Distribution of the Likelihood Ratio for
Testing Composite Hypotheses. Annals Math. Statist., 9(1):60–62, 1938.
[148] A. Wald. Tests of statistical hypotheses concerning several parameters when
the number of observations is large. Trans. Am. Math. Soc., 54:426–482, 1943.
[149] M. G. Aartsen et al. Searches for Extended and Point-like Neutrino Sources
with Four Years of IceCube Data. Astrophys. J., 796(2):109, 2014.
[150] J. Feintzeig. Searches for Point-like Sources of Astrophysical Neutrinos with
the IceCube Neutrino Observatory. PhD thesis, University of Wisconsin,
Madison, 2014.
[151] S. Adrian-Martinez et al. Search for Cosmic Neutrino Point Sources with Four
Years of Data from the ANTARES Telescope. Astrophys. J., 760:53, 2012.
[152] A. Hoecker et al. TMVA - Toolkit for Multivariate Data Analysis. PoS,
ACAT:040, 2007.
[153] S. Schönert, T. Gaisser, E. Resconi, and O. Schulz. Vetoing atmospheric
neutrinos in a high energy neutrino telescope. Phys. Rev. D, 79:043009, Feb
2009.
[154] T. Gaisser, K. Jero, A. Karle, and J. van Santen. Generalized self-veto
probability for atmospheric neutrinos. Phys. Rev., D90(2):023009, 2014.
[155] K. M. Górski et al. HEALPix: A framework for high-resolution discretization
and fast analysis of data distributed on the sphere. Astrophys. J., 622(2):759,
2005.
[156] S.P. Wakely and D. Horan. TeVCat: An online catalog for Very High Energy
Gamma-Ray Astronomy. In Proceedings, 30th International Cosmic Ray
Conference (ICRC 2007), volume 3, page 1341, 2007.
[157] J. Neyman. Outline of a Theory of Statistical Estimation Based on the
Classical Theory of Probability. Royal Society of London Philosophical
Transactions Series A, 236:333, 1937.
[158] A. Cooper-Sarkar, P. Mertsch, and S. Sarkar. The high energy neutrino
cross-section in the Standard Model and its uncertainty. JHEP, 08:042, 2011.
[159] R. Abbasi et al. Time-integrated searches for point-like sources of neutrinos
with the 40-string IceCube detector. Astrophys. J., 732(1):18, 2011.
[160] M. G. Aartsen et al. Searches for Time-dependent Neutrino Sources with
IceCube Data from 2008 to 2012. Astrophys. J., 807(1):46, 2015.
[161] M. G. Aartsen et al. Letter of Intent: The Precision IceCube Next Generation
Upgrade (PINGU). arXiv:1401.2046 [physics.ins-det], 2014.
[162] P. Bagley et al. KM3NeT Technical Design Report for a Deep-Sea Research
Infrastructure Incorporating a Very Large Volume Neutrino Telescope.
http://km3net.org/TDR/TDRKM3NeT.pdf, 2011.
ICRC 2015 Contribution

Author's contribution to the 34th International Cosmic Ray Conference (ICRC) in 2015.
The proceedings are pending publication in Proceedings of Science.2

2 Minor edits: In the published version, a factor of 2 is missing after the second and third
equal signs in equation 4.2; it is included here. The values in the eighth and ninth columns
of table 1 were interchanged in the published version and appear here in the correct order,
see table 10.5 in this thesis. The binning in the effective area plot on the right-hand side of
figure 1 is not correct, which leads to an underestimation of the effective area; this is purely
a visual effect and does not affect the results. The corrected effective area is shown in figure
10.28 in this thesis. The vertical axis of the left plot in figure 2 should read Φ0, see figure
10.30 in this thesis.
Low-energy (100 GeV - few TeV) neutrino point
source searches in the southern sky with IceCube
The IceCube Collaboration†
†http://icecube.wisc.edu/collaboration/authors/icrc15_icecube
E-mail: [email protected]
IceCube searches for neutrino point sources in the southern sky have traditionally been restricted
to energies well above 100 TeV, where the background of down-going atmospheric muons becomes sufficiently low to be tolerated in searches. Recent developments of a data stream dedicated to the study of low-energy neutrinos from the Southern Hemisphere enable searches to be
extended far below this threshold. This data stream utilizes powerful veto techniques to reduce
the atmospheric muon background, allowing IceCube for the first time to perform a search for
point-like sources of neutrinos in the southern sky at energies between 100 GeV and a few TeV.
We will present the event selection along with the results obtained using one year of data from
the full detector.
Corresponding author: R. Ström1∗
1 Department of Physics and Astronomy, Uppsala University, Box 516, 751 20 Uppsala, Sweden

The 34th International Cosmic Ray Conference,
30 July – 6 August 2015,
The Hague, The Netherlands

∗ Speaker.
© Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike Licence.
http://pos.sissa.it/
1. Introduction
Neutrinos interact only through gravity and the weak interaction, and can therefore, unlike other
messenger particles such as photons and protons, travel astronomical distances without experiencing
significant absorption or deflection in magnetic fields. Further, they constitute a unique probe for
distances larger than 50 Mpc and for extremely dense environments. This makes neutrinos ideal
cosmic messengers that can be used to explore the most powerful accelerators in the Universe,
in particular the mechanisms for producing and accelerating cosmic rays to incredible energies.
By studying clustering of neutrino candidate events in the IceCube Neutrino Observatory (IceCube) we can discover sites of hadronic acceleration where neutrinos are produced in the decay
of charged pions created when primary cosmic ray protons and nuclei interact with radiation and
matter in the vicinity of the acceleration site.
The Southern Hemisphere is particularly interesting, containing both the Galactic Center and
the main part of the Galactic plane. For IceCube this means we are studying down-going tracks
in a much larger background of muons created as cosmic rays enter the Earth’s atmosphere. The
traditional workaround has been to study only the highest energy neutrinos well above 100 TeV,
where the background of atmospheric muons is sufficiently low. In the LESE (Low Energy Starting
Events) analysis presented in this contribution, we have lowered the energy threshold significantly
by utilizing several veto techniques, i.e. by looking for events that start inside the detector. In
particular we study neutrinos from soft spectra with cut-offs at the TeV scale, as observed for
Galactic γ-ray sources; see e.g. [1, 2, 3] and references therein.
2. Data and Simulation
IceCube instruments a cubic kilometer of ice and is the world's largest neutrino telescope,
located under the ice cap at the geographic South Pole, Antarctica [4]. It consists of 5160 optical
modules deployed on 86 cables, called strings, arranged in a three-dimensional grid between
1450 and 2450 m beneath the surface. The ultra-pure ice at such depths is ideal for observations of
Cherenkov light from the charged leptons created when neutrinos interact with molecules in the ice
or in the nearby bedrock.
Each optical module operates as an individual detector, a so-called DOM (Digital Optical
Module), and consists of a photomultiplier tube, digitizer electronics and calibration LEDs, all
housed in a pressure-resistant glass sphere. When two neighboring or next-to-neighboring DOMs
on the same string both cross the 0.25 photoelectron discriminator threshold of the PMT within a
1 μs time window, the signals qualify to be in Hard Local Coincidence (HLC). The leading trigger
condition is formed when at least eight DOMs with HLCs trigger within a 5 μs time window. In
this analysis we also use events from a trigger dedicated to catching almost vertical events as well
as one with a lower effective energy threshold active in a denser central subarray called DeepCore
[5]. The total trigger rate considered is around 2.8 kHz. The data are reduced via a series of straight
cuts on reconstruction and calorimetric quality, and a cut on an event score calculated using a
machine learning algorithm.
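To make the coincidence and trigger conditions above concrete, here is a minimal sketch in Python. The flat hit layout of (string, DOM position, time) tuples is a hypothetical simplification, not IceCube software:

```python
# Minimal sketch of the Hard Local Coincidence (HLC) condition described
# above. Hypothetical data layout: one hit = (string, dom_position, time_ns);
# the real IceCube DAQ is considerably more involved.

def hlc_hits(hits, window_ns=1000.0, max_separation=2):
    """Return indices of hits paired with a neighboring or
    next-to-neighboring DOM on the same string within 1 us."""
    flagged = set()
    for i, (s1, d1, t1) in enumerate(hits):
        for j, (s2, d2, t2) in enumerate(hits):
            if j <= i or s1 != s2:
                continue
            if 0 < abs(d1 - d2) <= max_separation and abs(t1 - t2) <= window_ns:
                flagged.update((i, j))
    return sorted(flagged)

def simple_majority_trigger(hits, n_required=8, window_ns=5000.0):
    """Leading trigger condition: at least eight HLC DOMs within 5 us."""
    times = sorted(hits[i][2] for i in hlc_hits(hits))
    return any(t_end - t_start <= window_ns
               for t_start, t_end in zip(times, times[n_required - 1:]))
```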
We present the analysis and results for data taken between May 2011 and May 2012, the first
year of the completed 86-string IceCube detector. Runs were carefully selected to ensure that the
majority of the DOMs used in vetoes were active, both by requiring a certain fraction of active
DOMs, overall and individually in the two different veto regions applied at the filter level, and
by studying the event rates at higher analysis levels. The final livetime considered is 329 days. The
true right ascension directions of the data were kept blind until the last step of the analysis to keep
the event selection and the final likelihood implementation as unbiased as possible.

Figure 1: Left: BDT score distribution. Experimental data are shown as black dots, with error bars
indicating the statistical uncertainty (too small to be seen on this scale), further illustrated by the
gray shaded area. Simulated atmospheric muons are shown as a solid red line. Signal hypotheses,
normalized to the experimental data, are shown as blue lines: $E_\nu^{-2}$ (dash-dotted),
$E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$ (dashed), $E_\nu^{-2}e^{-E_\nu/1\,\mathrm{TeV}}$ (solid), and
$E_\nu^{-3}$ (dotted). Simulated signal events are defined as starting inside the instrumented detector
volume. Right: Neutrino effective area as a function of neutrino energy, for the full southern sky
(−90° ≤ δ < 0°) and in three declination bands (−90° ≤ δ < −60°, −60° ≤ δ < −30°, −30° ≤ δ < 0°).
3. Selecting Low-Energy Starting Events
The event selection is optimized for neutrinos in the energy region between 100 GeV and a few
TeV. The outermost parts of the instrumented detector are used as a veto against incoming muons.
The triggered data are reduced both by real-time filtering at the South Pole and by subsequent
CPU-intensive offline processing.
The development of a real-time full-sky filter with a low energy threshold was crucial to reach
lower energies, since many of the traditional IceCube real-time filters for the Southern Hemisphere
effectively used a charge cut, focusing on keeping events from 100 TeV and up. The filter
consists of two steps: a hit-pattern-based part, which rejects events with recorded hits in any of the
5 top DOMs or with the first HLC hit on any of the outermost strings, followed by a rejection of
events with a reconstructed interaction vertex outside of the fiducial volume1. About 170 Hz of
events pass the filter and are sent north for offline processing.

1 A polygon-shaped region defined by connecting lines that pass through points at 90% of the distance from the center
string (string 36) to the outermost strings at the positions of the 8 corners of the detector (as seen from the top),
restricted to the region below the 5 topmost DOMs.
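A schematic of the two filter steps might look as follows. The event structure (hits with `dom_position`, `is_hlc`, `string`, `time`, and a reconstructed `vertex`) is hypothetical, and the fiducial-volume test is delegated to a user-supplied function:

```python
# Sketch of the two-step low-energy filter described above; the event
# layout is an assumption made for illustration only.

TOP_VETO_DOMS = 5  # the 5 topmost DOMs on each string act as a veto cap

def passes_filter(event, outer_strings, vertex_in_fiducial):
    # Step 1a: reject events with any recorded hit in the top 5 DOMs.
    if any(hit.dom_position <= TOP_VETO_DOMS for hit in event.hits):
        return False
    # Step 1b: reject events whose first HLC hit is on an outermost string.
    hlc = [h for h in event.hits if h.is_hlc]
    if not hlc or min(hlc, key=lambda h: h.time).string in outer_strings:
        return False
    # Step 2: require the reconstructed interaction vertex to lie inside
    # the fiducial volume (the polygon of footnote 1).
    return vertex_in_fiducial(event.vertex)
```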
To ensure that we have enough information in the events for advanced angular and energy
reconstructions, we begin by applying overall quality cuts, such as a minimum required number
of photoelectrons, and further reject events with poor reconstruction quality. Several advanced
vetoes are then used to reduce the number of muons leaking in. In particular we study the causality
of hits in an extended veto region, consisting both of the outer regions of IceCube and of the
IceTop surface array [6].
Figure 2: Left: Median sensitivities as a function of declination (lines) and source upper limits
(markers) at 90% C.L. for the signal hypotheses $E^{-2}$, $E^{-2}e^{-E/10\,\mathrm{TeV}}$,
$E^{-2}e^{-E/1\,\mathrm{TeV}}$, and $E^{-3}$; livetime 329 days. Right: Differential median
sensitivity at Dec. = −60.0° and 90% C.L. for the LESE, STeVE, MESE, and through-going
analyses, corresponding to 3 years of operation (scaled from 1 year where marked). Created by
simulating point sources with an $E^{-2}$ spectrum over quarter-decades in energy.
As a major step in the event selection we apply a machine learning algorithm: a Boosted
Decision Tree (BDT) is trained to exploit the remaining separation between signal and background,
focusing in particular on directional and calorimetric information. The BDT event score
is shown in figure 1. We cut at 0.40, which was found to be optimal or close to optimal for
a wide range of signal hypotheses.
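As an illustration of this selection step, the sketch below uses scikit-learn's gradient-boosted trees as a stand-in for the TMVA BDT actually used in the analysis; the features and synthetic data are invented for demonstration, and the 0.40 cut value applies to the analysis' own score, so it is shown here only schematically:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Stand-in for the BDT selection described above (the analysis itself uses
# TMVA). Synthetic data: 4 mock directional/calorimetric features per event.
rng = np.random.default_rng(1)
X_sig = rng.normal(+0.5, 1.0, size=(1000, 4))   # simulated signal
X_bkg = rng.normal(-0.5, 1.0, size=(1000, 4))   # simulated atmospheric muons
X = np.vstack([X_sig, X_bkg])
y = np.array([1] * 1000 + [0] * 1000)

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)
score = bdt.decision_function(X)   # per-event BDT score, cf. figure 1
keep = score > 0.40                # cut chosen from sensitivity optimization
```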
The angular reconstruction uncertainty σ is estimated individually for each event as
$\sigma^2 = (\sigma_1^2 + \sigma_2^2)/2$, where $\sigma_{1,2}$ are the major and minor axes of the
ellipse that constitutes the conic section of a paraboloid fitted to the likelihood space around the
reconstruction minimum [7]. The ellipse is located at the level corresponding to 1σ, indicating 68%
containment. This quantity often underestimates the angular uncertainty, which is why we apply an
energy-dependent correction to the pull seen in simulation, defined as the ratio of the true
reconstruction error to the estimated reconstruction error, based on the median angular resolution.
Events with a corrected value above 5° were rejected after optimization studies of the final
sensitivity. The median angular resolution of the signal sample is about 2°.
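In code, the estimator and pull correction could be sketched as follows, with `pull_spline` a hypothetical energy-dependent correction function derived from simulation:

```python
import numpy as np

# Sketch of the per-event angular uncertainty described above: sigma1/sigma2
# are the paraboloid ellipse axes at the 1-sigma (68%) level, and pull_spline
# is a hypothetical correction, the simulation-derived median ratio of true
# to estimated reconstruction error as a function of the energy proxy.

def corrected_sigma(sigma1, sigma2, energy_proxy, pull_spline):
    sigma = np.sqrt((sigma1**2 + sigma2**2) / 2.0)      # quadratic mean of axes
    return sigma * pull_spline(np.log10(energy_proxy))  # energy-dependent pull

# Events with corrected_sigma(...) > 5 degrees are rejected in the analysis.
```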
The energy loss pattern for each event is estimated using an algorithm fitting hypothetical
cascade-like energy depositions along the input track reconstruction [8]. An energy proxy for
the neutrino energy is constructed by adding the contributions that are reconstructed inside the
instrumented volume.
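A minimal sketch of this proxy, assuming a hypothetical list of reconstructed (energy, position) depositions and a user-supplied containment test:

```python
# Sketch of the energy proxy described above: sum the cascade-like energy
# depositions reconstructed along the track [8] that fall inside the
# instrumented volume. `depositions` and `inside_detector` are hypothetical.

def energy_proxy(depositions, inside_detector):
    return sum(energy for energy, position in depositions
               if inside_detector(position))
```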
The final sample consists of 6191 events, a rate of about 0.2 mHz, and is still dominated by
atmospheric muons leaking in through the veto layers, effectively mimicking starting events. The
effective area for the full southern sky as well as in three declination bands is shown in figure 1.
4. Search for Clustering of Events
We look for point sources by searching for spatial clustering of events in the Southern Hemisphere.
Hypotheses are tested using an unbinned maximum likelihood method [9] where the events
in the final sample contribute individually with their reconstructed position $\mathbf{x}_i$, reconstructed
energy $E_i$, and an estimate of the angular uncertainty $\sigma_i$. The likelihood $\mathcal{L}$ is a function of two fit
parameters: $n_s$ (the number of signal events) and $\gamma$ (the spectral index of a source $S$ located at
position $\mathbf{x}_S$ with an unbroken power-law spectrum), and contains the probability for an event to be
signal- or background-like. The final likelihood function is a product over the $N$ events and is
maximized with respect to $n_s$ and $\gamma$, where $n_s$ is constrained to be non-negative:

$$\mathcal{L}(n_s,\gamma) = \prod_i^N \left[ \frac{n_s}{N}\, S_i(\mathbf{x}_i,\mathbf{x}_S;\sigma_i,E_i,\gamma) + \left(1-\frac{n_s}{N}\right) B_i(\delta_i;E_i) \right]. \tag{4.1}$$

The signal PDF $S$ contains a spatial part parametrized with a two-dimensional Gaussian profile
of extension $\sigma_i$. Since IceCube, due to its location, has a uniform acceptance in right ascension,
the spatial part of the background PDF $B$ is represented with a spline fit to the full sample of
experimental data, depending on the declination $\delta$ only. The calorimetric parts of these PDFs
consist of two-dimensional spline fits to the reconstructed energy $E_i$ and declination $\delta_i$.

The test statistic TS used is the likelihood ratio $\Lambda$ taken with respect to the null hypothesis
($n_s = 0$):

$$\mathrm{TS} = 2\log\Lambda = 2\sum_i \log\left[1 + \frac{n_s}{N}\left(\frac{S_i}{B_i} W_i - 1\right)\right] \equiv 2\sum_i \log\left(1 + n_s\chi_i\right), \tag{4.2}$$

where $\chi_i$ is the individual event weight evaluated for a point source hypothesis at a particular
location.
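A minimal numerical sketch of this fit, assuming the per-event PDF values S_i and B_i have already been evaluated at a fixed source position and spectral index (taking W_i = 1 and omitting the fit over γ for brevity):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch of eqs. (4.1)-(4.2): maximize the likelihood ratio over the number
# of signal events n_s, with chi_i = (S_i/B_i - 1)/N precomputed.

def test_statistic(S, B):
    N = len(S)
    chi = (S / B - 1.0) / N                  # individual event weights chi_i
    def neg_ts(ns):
        arg = np.clip(1.0 + ns * chi, 1e-12, None)   # guard the log domain
        return -2.0 * np.sum(np.log(arg))
    res = minimize_scalar(neg_ts, bounds=(0.0, N), method="bounded")
    return max(-res.fun, 0.0), res.x         # (TS, best-fit n_s)
```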
An all-sky search is performed, looking for an excess of neutrino candidate events in the
Southern Hemisphere (declination range −85° to 0°) on a HEALPix2 [10] grid corresponding to a
resolution of about 0.5°. In a second iteration we zoom in on the most interesting regions. This
search is not motivated by any prior knowledge of the positions of the sources; instead we are
limited by the large number of trials performed, which together with the angular resolution sets a
limit on the effective number of individual points tested. The pre-trial p-values obtained are
converted into post-trial p-values by counting the fraction of times we obtain an equal or greater
pre-trial p-value in an ensemble of 10000 pseudo-experiments in which the right ascension
directions were scrambled, see figure 3.
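The trial correction can be sketched as follows, with `scan` a hypothetical function that runs the full sky scan on a set of events and returns the smallest (hottest-spot) pre-trial p-value:

```python
import numpy as np

# Sketch of the post-trial correction described above: scramble the events
# in right ascension many times and count how often the scrambled sky
# produces a spot at least as hot as the one observed.

def post_trial_p(observed_best_p, events, scan, n_trials=10000, seed=0):
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_trials):
        scrambled = dict(events)  # hypothetical {"ra": ..., "dec": ..., ...}
        scrambled["ra"] = rng.uniform(0.0, 360.0, size=len(events["ra"]))
        if scan(scrambled) <= observed_best_p:
            count += 1
    return count / n_trials
```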
We also perform a search at the positions of 96 pre-defined sources: all 84 TeVCat3 [11] sources
in the Southern Hemisphere (as of May 2015) plus 12 sources traditionally tested by
IceCube ('GX 339-4', 'Cir X-1', 'PKS 0426-380', 'PKS 0537-441', 'QSO 2022-077', 'PKS 1406-076', 'PKS 0727-11',
'QSO 1730-130', 'PKS 0454-234', 'PKS 1622-297', 'PKS 1454-354', 'ESO 139-G12'). The TeVCat listing
consists of sources seen by ground-based gamma-ray experiments such as VERITAS, MAGIC,
and H.E.S.S.; from these we consider sources from the catalogs 'Default Catalog' and 'Newly
Announced', i.e. stable sources published or pending publication in refereed journals.
Figure 2 shows the median sensitivity of the LESE analysis for four different signal hypotheses
as a function of declination. We further illustrate the joint effort within the IceCube collaboration
towards lowering the energy threshold for point source searches by showing the differential
sensitivity for four different analyses: the LESE analysis, the STeVE (Starting TeV Events) analysis
optimized at slightly higher energies (few TeV – 100 TeV) [12], and the classic through-going point
source analysis in black [13]. The sensitivities shown correspond to 3 years of operation and were
estimated by simulating events with a livetime corresponding to that of the MESE (Medium-Energy
Starting Events) analysis, which focuses on energies in the region 100 TeV – 1 PeV [14].

2 http://healpix.sourceforge.net
3 http://tevcat.uchicago.edu
Figure 3: Pre-trial p-value distributions from 10000 scrambled trials for the sky scan (left,
post-trial p-value 0.881) and the source list scan (right, post-trial p-value 0.148). The red line
indicates the observed best p-value for the hottest spot and the hottest source, respectively, while
the black vertical lines represent the 1, 2, and 3σ significance levels.
5. Results
The pre-trial p-value distribution for the all-sky search is shown in figure 4. The hottest spot
is located at right ascension 305.2° and declination −8.5°, with a pre-trial p-value of 1.6·10−4 and
a post-trial p-value of 88.1% (best-fit parameters n̂s = 18.3 and γ̂ = 3.5). In figure 5 we show a
zoom-in on the TS distribution in the area surrounding the hottest spot, as well as on the
individual event weights $\log(1 + n_s\chi_i)$ that contribute to the test statistic at the position,
and with the best-fit parameters, of the hottest spot.
The most significant source in the pre-defined list was QSO 2022-0774, a Flat-Spectrum Radio
Quasar (FSRQ), with a pre-trial p-value of 2.5·10−3. The post-trial p-value is 14.8%. For the
sources in the list, we calculate upper limits at 90% confidence level (C.L.) based on the classical
approach [15] for the signal hypotheses $d\Phi/dE_\nu \propto E_\nu^{-2}$,
$E_\nu^{-2}e^{-E_\nu/10\,\mathrm{TeV}}$, $E_\nu^{-2}e^{-E_\nu/1\,\mathrm{TeV}}$, and $E_\nu^{-3}$;
see figure 2.
Table 1: The most significant sources in the pre-defined list. The p-values are the pre-trial
probability of being compatible with the background-only hypothesis; n̂s and γ̂ are the best-fit
values from the likelihood optimization. We further show the flux upper limit Φ at 90% C.L. for a
number of neutrino flux models, given for the combined flux of νμ and ν̄μ in units of
TeVγ−1 cm−2 s−1.

Source            R.A. [°]   Dec. [°]   −log10(p-val.)   n̂s     γ̂     E−2          E−2 e−E/10 TeV   E−2 e−E/1 TeV   E−3
QSO 2022-0774     306.5      −7.6       2.61             17.3   3.5   6.74·10−10   1.56·10−9        5.81·10−9       1.27·10−9
HESS J1718-385    259.5      −38.5      1.58             5.9    3.4   7.62·10−10   2.80·10−9        3.21·10−8       5.58·10−9
HESS J1841-055    280.3      −5.53      1.48             5.0    2.4   5.01·10−10   1.19·10−9        4.52·10−9       9.88·10−10
KUV 00311-1938    8.4        −19.4      1.40             7.1    3.4   8.25·10−10   2.47·10−9        1.33·10−8       2.65·10−9
30 Dor C          83.9       −69.1      1.04             2.3    3.6   9.14·10−10   3.65·10−9        5.35·10−8       7.92·10−9

4 Alternate name: 2EG J2023-0836
Figure 4: Pre-trial significance sky map in equatorial coordinates (J2000) showing the LESE
analysis result on data from the first year of the full 86-string configuration of IceCube. The
location of the Galactic plane is illustrated with a black solid line, and the position of the most
significant fluctuation is indicated with a black circle. The hottest spot is located at R.A. 305.2°
and dec. −8.5° with a pre-trial p-value of 1.6·10−4. The corresponding post-trial p-value is 88.1%.

Figure 5: Zoom on the hottest spot (red circle and cross) and the hottest source (red circle) in
equatorial coordinates R.A. and dec. The reconstructed position of each event is shown as a black
dot. Left: the color transparency indicates the test statistic value at each pixel in the sky. Right:
the radius of each circle equals the estimated angular uncertainty of the event; the color
transparency indicates the strength of the individual likelihood weights $\log(1 + n_s\chi_i)$ that
contribute to the test statistic at the position, and with the best-fit parameters, of the hottest spot.
The limits for the five most significant sources are presented in table 1, along with the best-fit
parameters n̂s and γ̂. Note that we do not consider under-fluctuations, i.e. for observed values
of TS below the median TS of the background-only hypothesis we report the corresponding
median upper limit.
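This convention could be sketched as follows, with `limit_from_ts` a hypothetical monotonic mapping from a TS value to a 90% C.L. flux upper limit via the classical construction [15]:

```python
import numpy as np

# Sketch of the convention described above: for an observed TS below the
# background-only median, report the median upper limit (the sensitivity)
# rather than the smaller observed one.

def reported_limit(ts_observed, ts_background_trials, limit_from_ts):
    ts_median = np.median(ts_background_trials)
    return limit_from_ts(max(ts_observed, ts_median))
```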
6. Summary and Outlook
No evidence of neutrino emission from point-like sources in the Southern Hemisphere at low
energies (100 GeV – few TeV) was found in the first year of data from the 86-string configuration
of IceCube. The most significant spot in the sky has a post-trial p-value of 88.1%. Further, we
tested for the presence of a signal from 96 sources in a pre-defined source list of known γ-ray
emitters seen in ground-based telescopes. The results are consistent with the background-only
hypothesis; the most significant source has a post-trial p-value of 14.8%. Upper limits at 90% C.L.
were calculated for these sources for a number of signal hypotheses, see figure 2 and table 1.

Three more years of IceCube data with the full configuration exist and are ready to be
incorporated into this analysis. We expect the sensitivity to improve slightly faster than with the
square root of time, due to the relatively low background rate. Further, we plan to perform a
search for extended sources and for neutrino emission in the Galactic plane.
References
[1] W. Bednarek, G. F. Burgio, and T. Montaruli, New Astron.Rev. 49 (2005) 1.
[2] A. Kappes, J. Hinton, C. Stegmann, and F. A. Aharonian, Astrophys.J. 656 (2007) 870.
[3] M. D. Kistler and J. F. Beacom, Phys.Rev. D74 (2006) 063007.
[4] IceCube Collaboration, A. Achterberg et al., Astropart.Phys. 26 (2006) 155.
[5] IceCube Collaboration, R. Abbasi et al., Astropart.Phys. 35 (2012) 615.
[6] IceCube Collaboration, R. Abbasi et al., Nucl.Instrum.Meth. A700 (2013) 188.
[7] T. Neunhöffer, Astropart.Phys. 25 (2006) 220.
[8] IceCube Collaboration, M. Aartsen et al., JINST 9 (2014) P03009.
[9] J. Braun, J. Dumm, F. De Palma, C. Finley, A. Karle, et al., Astropart.Phys. 29 (2008) 299.
[10] K. M. Górski, E. Hivon, A. J. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and
M. Bartelmann, The Astrophysical Journal 622 (2005), no. 2 759.
[11] S. P. Wakely and D. Horan, TeVCat: An online catalog for Very High Energy Gamma-Ray Astronomy,
in Proceedings, 30th International Cosmic Ray Conference (ICRC 2007), vol. 3, p. 1341, 2007.
[12] IceCube Collaboration, M. Aartsen et al., Medium-energy (few TeV - 100 TeV) neutrino point source
searches in the Southern sky with IceCube, in these proceedings, PoS(ICRC2015)1056, 2015.
[13] IceCube Collaboration, M. Aartsen et al., Searches for Extended and Point-like Neutrino Sources
with Four Years of IceCube Data, 2014. arXiv:1406.6757.
[14] J. Feintzeig, Searches for Point-like Sources of Astrophysical Neutrinos with the IceCube Neutrino
Observatory. PhD thesis, University of Wisconsin, Madison, 2014.
[15] J. Neyman, Royal Society of London Philosophical Transactions Series A 236 (1937) 333.
Acta Universitatis Upsaliensis
Uppsala Dissertations from the Faculty of Science
Editor: The Dean of the Faculty of Science
1–11: 1970–1975
12. Lars Thofelt: Studies on leaf temperature recorded by direct measurement and
by thermography. 1975.
13. Monica Henricsson: Nutritional studies on Chara globularis Thuill., Chara zeylanica Willd., and Chara haitensis Turpin. 1976.
14. Göran Kloow: Studies on Regenerated Cellulose by the Fluorescence Depolarization Technique. 1976.
15. Carl-Magnus Backman: A High Pressure Study of the Photolytic Decomposition of Azoethane and Propionyl Peroxide. 1976.
16. Lennart Källströmer: The significance of biotin and certain monosaccharides
for the growth of Aspergillus niger on rhamnose medium at elevated temperature. 1977.
17. Staffan Renlund: Identification of Oxytocin and Vasopressin in the Bovine Adenohypophysis. 1978.
18. Bengt Finnström: Effects of pH, Ionic Strength and Light Intensity on the Flash
Photolysis of L-tryptophan. 1978.
19. Thomas C. Amu: Diffusion in Dilute Solutions: An Experimental Study with
Special Reference to the Effect of Size and Shape of Solute and Solvent Molecules. 1978.
20. Lars Tegnér: A Flash Photolysis Study of the Thermal Cis-Trans Isomerization
of Some Aromatic Schiff Bases in Solution. 1979.
21. Stig Tormod: A High-Speed Stopped Flow Laser Light Scattering Apparatus and
its Application in a Study of Conformational Changes in Bovine Serum Albumin. 1985.
22. Björn Varnestig: Coulomb Excitation of Rotational Nuclei. 1987.
23. Frans Lettenström: A study of nuclear effects in deep inelastic muon scattering.
1988.
24. Göran Ericsson: Production of Heavy Hypernuclei in Antiproton Annihilation.
Study of their decay in the fission channel. 1988.
25. Fang Peng: The Geopotential: Modelling Techniques and Physical Implications
with Case Studies in the South and East China Sea and Fennoscandia. 1989.
26. Md. Anowar Hossain: Seismic Refraction Studies in the Baltic Shield along the
Fennolora Profile. 1989.
27. Lars Erik Svensson: Coulomb Excitation of Vibrational Nuclei. 1989.
28. Bengt Carlsson: Digital differentiating filters and model based fault detection.
1989.
29. Alexander Edgar Kavka: Coulomb Excitation. Analytical Methods and Experimental Results on even Selenium Nuclei. 1989.
30. Christopher Juhlin: Seismic Attenuation, Shear Wave Anisotropy and Some
Aspects of Fracturing in the Crystalline Rock of the Siljan Ring Area, Central
Sweden. 1990.
31. Torbjörn Wigren: Recursive Identification Based on the Nonlinear Wiener Model.
1990.
32. Kjell Janson: Experimental investigations of the proton and deuteron structure
functions. 1991.
33. Suzanne W. Harris: Positive Muons in Crystalline and Amorphous Solids. 1991.
34. Jan Blomgren: Experimental Studies of Giant Resonances in Medium-Weight
Spherical Nuclei. 1991.
35. Jonas Lindgren: Waveform Inversion of Seismic Reflection Data through Local
Optimisation Methods. 1992.
36. Liqi Fang: Dynamic Light Scattering from Polymer Gels and Semidilute Solutions.
1992.
37. Raymond Munier: Segmentation, Fragmentation and Jostling of the Baltic Shield
with Time. 1993.
Prior to January 1994, the series was called Uppsala Dissertations from the Faculty of
Science.
Acta Universitatis Upsaliensis
Uppsala Dissertations from the Faculty of Science and Technology
Editor: The Dean of the Faculty of Science
1–14: 1994–1997. 15–21: 1998–1999. 22–35: 2000–2001. 36–51: 2002–2003.
52. Erik Larsson: Identification of Stochastic Continuous-time Systems. Algorithms,
Irregular Sampling and Cramér-Rao Bounds. 2004.
53. Per Åhgren: On System Identification and Acoustic Echo Cancellation. 2004.
54. Felix Wehrmann: On Modelling Nonlinear Variation in Discrete Appearances of
Objects. 2004.
55. Peter S. Hammerstein: Stochastic Resonance and Noise-Assisted Signal Transfer.
On Coupling-Effects of Stochastic Resonators and Spectral Optimization of Fluctuations in Random Network Switches. 2004.
56. Esteban Damián Avendaño Soto: Electrochromism in Nickel-based Oxides. Coloration Mechanisms and Optimization of Sputter-deposited Thin Films. 2004.
57. Jenny Öhman Persson: The Obvious & The Essential. Interpreting Software Development & Organizational Change. 2004.
58. Chariklia Rouki: Experimental Studies of the Synthesis and the Survival Probability of Transactinides. 2004.
59. Emad Abd-Elrady: Nonlinear Approaches to Periodic Signal Modeling. 2005.
60. Marcus Nilsson: Regular Model Checking. 2005.
61. Pritha Mahata: Model Checking Parameterized Timed Systems. 2005.
62. Anders Berglund: Learning computer systems in a distributed project course: The
what, why, how and where. 2005.
63. Barbara Piechocinska: Physics from Wholeness. Dynamical Totality as a Conceptual Foundation for Physical Theories. 2005.
64. Pär Samuelsson: Control of Nitrogen Removal in Activated Sludge Processes.
2005.
65. Mats Ekman: Modeling and Control of Bilinear Systems. Application to the Activated Sludge Process. 2005.
66. Milena Ivanova: Scalable Scientific Stream Query Processing. 2005.
67. Zoran Radović: Software Techniques for Distributed Shared Memory. 2005.
68. Richard Abrahamsson: Estimation Problems in Array Signal Processing, System
Identification, and Radar Imagery. 2006.
69. Fredrik Robelius: Giant Oil Fields – The Highway to Oil. Giant Oil Fields and their
Importance for Future Oil Production. 2007.
70. Anna Davour: Search for low mass WIMPs with the AMANDA neutrino telescope.
2007.
71. Magnus Ågren: Set Constraints for Local Search. 2007.
72. Ahmed Rezine: Parameterized Systems: Generalizing and Simplifying Automatic
Verification. 2008.
73. Linda Brus: Nonlinear Identification and Control with Solar Energy Applications.
2008.
74. Peter Nauclér: Estimation and Control of Resonant Systems with Stochastic Disturbances. 2008.
75. Johan Petrini: Querying RDF Schema Views of Relational Databases. 2008.
76. Noomene Ben Henda: Infinite-state Stochastic and Parameterized Systems. 2008.
77. Samson Keleta: Double Pion Production in dd→αππ Reaction. 2008.
78. Mei Hong: Analysis of Some Methods for Identifying Dynamic Errors-in-Variables
Systems. 2008.
79. Robin Strand: Distance Functions and Image Processing on Point-Lattices With
Focus on the 3D Face- and Body-centered Cubic Grids. 2008.
80. Ruslan Fomkin: Optimization and Execution of Complex Scientific Queries. 2009.
81. John Airey: Science, Language and Literacy. Case Studies of Learning in Swedish
University Physics. 2009.
82. Arvid Pohl: Search for Subrelativistic Particles with the AMANDA Neutrino Telescope. 2009.
83. Anna Danielsson: Doing Physics – Doing Gender. An Exploration of Physics Students’ Identity Constitution in the Context of Laboratory Work. 2009.
84. Karin Schönning: Meson Production in pd Collisions. 2009.
85. Henrik Petrén: η Meson Production in Proton-Proton Collisions at Excess Energies
of 40 and 72 MeV. 2009.
86. Jan Henry Nyström: Analysing Fault Tolerance for ERLANG Applications. 2009.
87. John Håkansson: Design and Verification of Component Based Real-Time Systems. 2009.
88. Sophie Grape: Studies of PWO Crystals and Simulations of the p̄p → Λ̄Λ, Λ̄Σ0 Reactions for the PANDA Experiment. 2009.
90. Agnes Rensfelt. Viscoelastic Materials. Identification and Experiment Design. 2010.
91. Erik Gudmundson. Signal Processing for Spectroscopic Applications. 2010.
92. Björn Halvarsson. Interaction Analysis in Multivariable Control Systems. Applications to Bioreactors for Nitrogen Removal. 2010.
93. Jesper Bengtson. Formalising process calculi. 2010.
94. Magnus Johansson. Psi-calculi: a Framework for Mobile Process Calculi. Cook
your own correct process calculus – just add data and logic. 2010.
95. Karin Rathsman. Modeling of Electron Cooling. Theory, Data and Applications.
2010.
96. Liselott Dominicus van den Bussche. Getting the Picture of University Physics.
2010.
97. Olle Engdegård. A Search for Dark Matter in the Sun with AMANDA and IceCube.
2011.
98. Matthias Hudl. Magnetic materials with tunable thermal, electrical, and dynamic
properties. An experimental study of magnetocaloric, multiferroic, and spin-glass
materials. 2012.