ecCNO solar neutrinos: a challenge for gigantic ultra-pure liquid scintillator detectors
arXiv:1410.2796v1 [hep-ph] 10 Oct 2014
F.L. Villante 1,2
1 Università dell'Aquila, Dipartimento di Scienze Fisiche e Chimiche, L'Aquila, Italy
2 INFN, Laboratori Nazionali del Gran Sasso, Assergi (AQ), Italy
Abstract
Neutrinos produced in the Sun by electron capture reactions on ¹³N, ¹⁵O and ¹⁷F, to which we refer as ecCNO neutrinos, are not usually considered in solar neutrino analysis since the expected fluxes are extremely low. The experimental determination of this sub-dominant component of the solar neutrino flux is very difficult but could be rewarding, since it provides a determination of the metallic content of the solar core and, moreover, probes the solar neutrino survival probability in the transition region at Eν ∼ 2.5 MeV. In this letter, we suggest that this difficult measurement could be within reach for future gigantic ultra-pure liquid scintillator detectors, such as LENA.
arXiv:1410.2624v1 [physics.atom-ph] 9 Oct 2014
Spectroscopy of Ba and Ba+ deposits in solid xenon for barium tagging in
nEXO
B. Mong,1,2 S. Cook,1,∗ T. Walton,1 C. Chambers,1 A. Craycraft,1 C. Benitez-Medina,1,† K. Hall,1,‡ W. Fairbank Jr.,1,§ J.B. Albert,3 D.J. Auty,4 P.S. Barbeau,5 V. Basque,6 D. Beck,7 M. Breidenbach,8 T. Brunner,9 G.F. Cao,10 B. Cleveland,2,¶ M. Coon,7 T. Daniels,8 S.J. Daugherty,3 R. DeVoe,9 T. Didberidze,4 J. Dilling,11 M.J. Dolinski,12 M. Dunford,6 L. Fabris,13 J. Farine,2 W. Feldmeier,14 P. Fierlinger,14 D. Fudenberg,9 G. Giroux,15,∗∗ R. Gornea,15 K. Graham,6 G. Gratta,9 M. Heffner,16 M. Hughes,4 X.S. Jiang,10 T.N. Johnson,3 S. Johnston,17 A. Karelin,18 L.J. Kaufman,3 R. Killick,6 T. Koffas,6 S. Kravitz,9 R. Krücken,11 A. Kuchenkov,18 K.S. Kumar,19 D.S. Leonard,20 C. Licciardi,6 Y.H. Lin,12 J. Ling,7 R. MacLellan,21 M.G. Marino,14 D. Moore,9 A. Odian,8 I. Ostrovskiy,9 A. Piepke,4 A. Pocar,17 F. Retiere,11 P.C. Rowson,8 M.P. Rozo,6 A. Schubert,9 D. Sinclair,11,6 E. Smith,12 V. Stekhanov,18 M. Tarka,7 T. Tolba,15 K. Twelker,9 J.-L. Vuilleumier,15 J. Walton,7 M. Weber,9 L.J. Wen,10 U. Wichoski,2 L. Yang,7 Y.-R. Yen,12 and Y.B. Zhao10
1 Physics Department, Colorado State University, Fort Collins CO, USA
2 Department of Physics, Laurentian University, Sudbury ON, Canada
3 Physics Department and CEEM, Indiana University, Bloomington IN, USA
4 Department of Physics and Astronomy, University of Alabama, Tuscaloosa AL, USA
5 Department of Physics, Duke University, and Triangle Universities Nuclear Laboratory (TUNL), Durham, North Carolina, USA
6 Physics Department, Carleton University, Ottawa ON, Canada
7 Physics Department, University of Illinois, Urbana-Champaign IL, USA
8 SLAC National Accelerator Laboratory, Stanford CA, USA
9 Physics Department, Stanford University, Stanford CA, USA
10 Institute of High Energy Physics, Beijing, China
11 TRIUMF, Vancouver BC, Canada
12 Department of Physics, Drexel University, Philadelphia PA, USA
13 Oak Ridge National Laboratory, Oak Ridge TN, USA
14 Technische Universität München, Physikdepartment and Excellence Cluster Universe, Garching, Germany
15 LHEP, Albert Einstein Center, University of Bern, Bern, Switzerland
16 Lawrence Livermore National Laboratory, Livermore CA, USA
17 Physics Department, University of Massachusetts, Amherst MA, USA
18 Institute for Theoretical and Experimental Physics, Moscow, Russia
19 Department of Physics and Astronomy, Stony Brook University, SUNY, Stony Brook NY, USA
20 Department of Physics, University of Seoul, Seoul, Korea
21 Department of Physics, University of South Dakota, Vermillion SD, USA
(Dated: October 13, 2014)
Progress on a method of barium tagging for the nEXO double beta decay experiment is reported.
Absorption and emission spectra for deposits of barium atoms and ions in solid xenon matrices are
presented. Excitation spectra for prominent emission lines, temperature dependence and bleaching
of the fluorescence reveal the existence of different matrix sites. A regular series of sharp lines
observed in Ba+ deposits is identified with some type of barium hydride molecule. Lower limits
for the fluorescence quantum efficiency of the principal Ba emission transition are reported. Under
current conditions, an image of ≤ 10⁴ Ba atoms can be obtained. Prospects for imaging single Ba
atoms in solid xenon are discussed.
∗ Now at NIST, Boulder CO, USA
† Now at Intel, Hillsboro OR, USA
‡ Now at Raytheon, Denver CO, USA
§ Corresponding author
¶ Also SNOLAB, Sudbury ON, Canada
∗∗ Now at Queen's University, Kingston ON, Canada
I. INTRODUCTION
The spectroscopy of atoms and molecules isolated
in solid matrices of inert gases dates back sixty years
[1]. Matrix isolation spectroscopy, as this method is
known, has established that atomic states in noble
gas matrices retain many of the fundamental proper-
Design and Development of a Nanoscale Multi Probe System Using
Open Source SPM Controller and GXSM Software: A Tool of
Nanotechnology.
S. K. Suresh Babu a, J. S. DevrenjithSingh c, D. Jackuline Moni b, D. Devaprakasam a,*
a Faculty, NEMS/MEMS/NANOLITHOGRAPHY Lab, Department of Nanosciences and Technology, Karunya University, Coimbatore-641114, India
b Faculty, Department of Electronics and Communication Engineering, Karunya University, Coimbatore-641114, India
c PG Scholar, NEMS/MEMS/NANOLITHOGRAPHY Lab, Department of Nanosciences and Technology, Karunya University, Coimbatore-641114, India
Abstract:
We report the design, development, installation and troubleshooting of the open-source Gnome X Scanning Microscopy (GXSM) software package for controlling modern Scanning Probe Microscopy (SPM) systems and processing their data, as a development tool for nanotechnology. GXSM is a full-featured analysis tool for the characterization of nanomaterials and supports several probe techniques, including Atomic Force Microscopy (AFM), Scanning Tunneling Microscopy (STM), Scanning Tunneling Spectroscopy (STS) and nanoindentation. The package comprises the digital signal processing (DSP) and image processing subsystems of the SPM: a DSP subsystem runs the feedback loop, generates the scanning signals and acquires the data during SPM measurements. With the SR-Hwl plug-in installed, the package was tested in no-hardware mode.
Keywords: GXSM; STS; DSP; SPM; SR-Hwl
1. Introduction
Gnome X Scanning Microscopy (GXSM) is a powerful tool for data acquisition and control of scanning probe microscopes. It is used with scanning tunneling microscopy (STM), atomic force microscopy (AFM) and scanning tunneling spectroscopy (STS). GXSM provides various methods for visualizing and manipulating 2D data of various types (byte, short, long, double). We are currently using it for scanning tunneling microscopy (STM) and atomic force microscopy (AFM) [1] [2] [3]. Data are presented by default as a (grey or false-color) image, but the view can be switched to a 1D profile, to on-the-fly profile extraction, or to a 3D shaded view (using MesaGL) which now offers a sophisticated scene setup. The "high-level" scan controller is separated from the GXSM core and is built as a plug-in, while the real-time "low-level" scanning process, data acquisition and feedback loop (if needed) run on the DSP if one is present; otherwise a dummy image is produced [4].
GXSM offers extremely flexible configuration of user settings, data acquisition and probe modes, as well as special instrument-control plug-ins. A plug-in categorizing mechanism automatically loads only the plug-ins required for the actual setup: for example, no hardware-control plug-ins are loaded in "offline" data-analysis mode. More than 80 plug-ins are in use.
GXSM itself is fully hardware independent. It provides a generic hardware-interface (HwI) plug-in infrastructure to attach any kind of hardware. The HwI manages the low-level tasks and passes the data to the GXSM core; in addition, it provides the GUI needed to give the user access to and control of hardware- and instrument-specific parameters and tasks.
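To make the hardware-interface pattern just described concrete, the following is a minimal sketch in Python (chosen for brevity; GXSM itself is not written in Python). All class and method names here are hypothetical illustrations of the pattern, not the actual GXSM API; the dummy backend mirrors the spirit of the SR-Hwl no-hardware mode mentioned above.

```python
from abc import ABC, abstractmethod
import random

class HardwareInterface(ABC):
    """Hypothetical stand-in for a GXSM HwI plug-in: the core only
    sees this interface, never the instrument-specific details."""

    @abstractmethod
    def start_scan(self, nx: int, ny: int) -> None: ...

    @abstractmethod
    def read_line(self, y: int) -> list: ...

class NoHardwareHwI(HardwareInterface):
    """Dummy backend (in the spirit of SR-Hwl): produces synthetic
    data so the core pipeline can be tested without an instrument."""

    def start_scan(self, nx, ny):
        self.nx = nx

    def read_line(self, y):
        # Return a fake topography line instead of real DSP data.
        return [random.random() for _ in range(self.nx)]

def acquire_image(hwi: HardwareInterface, nx=256, ny=256):
    """Core-side acquisition loop: identical for real or dummy hardware."""
    hwi.start_scan(nx, ny)
    return [hwi.read_line(y) for y in range(ny)]

image = acquire_image(NoHardwareHwI())  # runs with no instrument attached
```

The point of the design is that the acquisition loop never branches on the hardware type: swapping the real DSP-backed plug-in for the dummy one changes nothing on the core side.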
The GXSM software can be divided into three parts. First, the GXSM core provides the main functionality for handling and visualizing data. The basic functions of the core can be extended using plug-ins, which are small pieces of software dynamically linked to the core. The plug-ins are described in the second part of the manual. The third part documents the digital signal processing
*Tel: 0422-2641300, Fax: 0422-2615615, Email: [email protected]
Published in Journal of Magnetism and Magnetic Materials, Vol. 322 (22), 3664-3671, 2010
DOI: 10.1016/j.jmmm.2010.07.022
Comparison of adjustable permanent magnetic field
sources
arXiv:1410.2681v1 [physics.ins-det] 10 Oct 2014
R. Bjørk, C. R. H. Bahl, A. Smith and N. Pryds
Abstract
A permanent magnet assembly in which the flux density can be altered by a mechanical operation is often
significantly smaller than comparable electromagnets and also requires no electrical power to operate. In this
paper five permanent magnet designs in which the magnetic flux density can be altered are analyzed using
numerical simulations, and compared based on the generated magnetic flux density in a sample volume and
the amount of magnet material used. The designs are the concentric Halbach cylinder, the two half Halbach
cylinders, the two linear Halbach arrays and the four and six rod mangle. The concentric Halbach cylinder design
is found to be the best performing design, i.e. the design that provides the most magnetic flux density using the
least amount of magnet material. A concentric Halbach cylinder has been constructed and the magnetic flux
density, the homogeneity and the direction of the magnetic field are measured and compared with numerical
simulation, and good agreement is found.
Department of Energy Conversion and Storage, Technical University of Denmark - DTU, Frederiksborgvej 399, DK-4000 Roskilde, Denmark
*Corresponding author: [email protected]
1. Introduction
A homogeneous magnetic field for which the flux density
can be controlled is typically produced by an electromagnet.
To generate a magnetic flux density of 1.0 T over a reasonably sized gap an electromagnet requires a large amount of
power, typically more than a thousand watts, and additionally
a chiller is needed to keep the electromagnet from overheating.
This makes any application using such an electromagnet very
power consuming.
Instead of using an electromagnet a permanent magnet
configuration for which the flux density can be controlled by a
mechanical operation can be used. A number of such variable
permanent magnetic flux sources have previously been investigated separately [1; 2], and presented in a brief overview [3]
but no detailed investigations determining the relative efficiencies of the different designs have been published. Here five
such designs are compared and the best performing design
is found. The efficiency of some of the magnet designs discussed in this paper have also been analyzed elsewhere [4; 5].
However, there only the efficiency of designs of infinite length
is characterized. In this paper we consider designs of finite
length, which is important as the flux density generated by a
finite length magnet assembly is significantly reduced compared to designs of infinite length. Also we parameterize the
optimal designs, allowing other researchers to build efficient
magnet assemblies.
Examples of applications where an adjustable permanent
magnet assembly can be used are nuclear magnetic resonance
(NMR) apparatus [6], magnetic cooling devices [7] and particle accelerators [8]. The flux density source designed in this
paper is dimensioned for a new differential scanning calorime-
ter (DSC) operating under magnetic field designed and built at
Risø DTU [9], but the general results apply for any application
in which a variable magnetic field source is needed.
2. Variable magnetic field sources
2.1 Design requirements
In the analysis of a variable magnetic field source some design constraints must be imposed, such as the minimum and
maximum producible flux density. In this analysis the maximum flux density is chosen to be 1.5 T which is a useful flux
density for a range of experiments. The minimum flux density
is required to be less than 0.1 T both to allow measurements
at low values of the magnetic flux density, as well as to allow
placement of a sample with only small interaction with the
magnetic field. Also a flux density of less than 0.1 T is more
easily realizable in actual magnet assemblies than if exactly 0
T had been required. Ideally the flux density must be homogeneous across the sample at any value between the high and
low values. The mechanical force needed to adjust the flux
density is also considered.
The magnet assembly must be able to contain a sample
that can be exposed to the magnetic field, and the sample
must of course be able to be moved in and out of the magnet
assembly. The size of a sample can be chosen arbitrarily, and
for this investigation a sample volume shaped as a cylinder
with a radius of 10 mm and a length of 10 mm was chosen.
To allow the sample to be moved we require that the clearance
between the magnet and the sample must be at least 2.5 mm,
in effect increasing the gap radius to 12.5 mm. The sample
volume is sufficiently large to allow the magnet designs to be
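For orientation, the flux density that an ideal, infinitely long Halbach cylinder produces in its bore is given by the standard result (quoted here as background; it is not derived in this excerpt):

```latex
B \;=\; B_{\mathrm{rem}}\,\ln\!\left(\frac{r_{\mathrm{o}}}{r_{\mathrm{i}}}\right),
```

where B_rem is the remanence of the magnet material and r_o and r_i are the outer and inner radii of the cylinder. As the introduction stresses, a finite-length assembly produces significantly less than this ideal value, which is why finite-length designs are compared numerically in this paper.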
Published in IEEE Transactions on Magnetics, Vol. 47 (6), 1687-1692, 2011
DOI: 10.1109/TMAG.2011.2114360
Improving magnet designs with high and low field
regions
arXiv:1410.2679v1 [physics.ins-det] 10 Oct 2014
R. Bjørk, C. R. H. Bahl, A. Smith and N. Pryds
Abstract
A general scheme for increasing the difference in magnetic flux density between a high and a low magnetic field
region by removing unnecessary magnet material is presented. This is important in, e.g., magnetic refrigeration
where magnet arrays have to deliver high field regions in close proximity to low field regions. Also, a general way
to replace magnet material with a high permeability soft magnetic material where appropriate is discussed. As
an example these schemes are applied to a two dimensional concentric Halbach cylinder design resulting in a
reduction of the amount of magnet material used by 42% while increasing the difference in flux density between
a high and a low field region by 45%.
Department of Energy Conversion and Storage, Technical University of Denmark - DTU, Frederiksborgvej 399, DK-4000 Roskilde, Denmark
*Corresponding author: [email protected]
1. Introduction
Designing a permanent magnet structure that contains regions
of both high and low magnetic field and ensuring a high flux
difference between these can be challenging. Such magnets
can be used for a number of purposes but here we will consider
magnetic refrigeration as an example. Magnetic refrigeration
is a potentially highly energy efficient and environmentally
friendly cooling technology, based on the magnetocaloric effect. This effect manifests itself as a temperature change that
so-called magnetocaloric materials exhibit when subjected to
a changing magnetic field. In magnetic refrigeration a magnetocaloric material is moved in and out of a magnetic field, in
order to generate cooling. The magnetic field is usually generated by permanent magnets [1; 2]. In such magnet designs
used in magnetic refrigeration it is very important to obtain a
large difference in flux density between the high and the low
flux density regions, between which the magnetocaloric material is moved in order to generate the magnetocaloric effect.
This is because the magnetocaloric effect scales with the magnetic field to the power of 0.7 near the Curie temperature for
most magnetocaloric materials of interest, and in particular for
the benchmark magnetocaloric material Gd [3; 4]. Because
of this scaling it is very important that the magnetic field in
a low field region is very close to zero. This is especially a
problem in rotary magnetic refrigerators [2; 5; 6; 7] where
the high and low magnetic field regions are constrained to be
close together. Here it is crucial to ensure that flux does not
leak from the high field region into the low field region.
The permanent magnet structure can be designed from the
ground up to accommodate this criterion, e.g., by designing
the structure through Monte Carlo optimization [8], or by
optimizing the direction of magnetization of the individual
magnets in the design [9; 10]. However, the resulting design
may be unsuitable for construction. Here we present a scheme that, applied to a given magnet design, will lower the flux
density in the low flux density region, thus increasing the
difference in flux density, and lower the amount of magnet
material used at the same time. No general way to improve
the flux density difference for a magnet design has previously
been presented.
2. Physics of the scheme
The properties of field lines of the magnetic flux density can be
exploited to minimize the magnetic flux in a given region. A
field line is a curve whose tangent at every point is parallel to
the vector field at that point. These lines can be constructed for
any vector field. The magnitude of the magnetic flux density,
B, is proportional to the density of field lines. For a two
dimensional problem, as will be considered here, with a static
magnetic field, lines of constant magnetic vector potential, Az, are identical to field lines of B if the Coulomb gauge, i.e. ∇ · A = 0, is chosen [11]. We begin by calculating a field line of the magnetic flux density, B, i.e. an equipotential line of constant Az, that encloses the area in which the flux density is to be minimized. All field lines enclosed by the calculated field line are confined to the enclosed area, as field lines do not cross. These enclosed field lines create the flux density inside the calculated area. This procedure only works in the two dimensional case, as in three dimensions a field line does not enclose a volume. There, a surface of field lines that encloses the volume must be used instead.
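The equivalence invoked here can be seen in one line: in two dimensions, with A = A_z(x, y) ẑ,

```latex
\mathbf{B} = \nabla\times\mathbf{A}
  = \left(\frac{\partial A_z}{\partial y},\; -\frac{\partial A_z}{\partial x},\; 0\right)
  \quad\Longrightarrow\quad \mathbf{B}\cdot\nabla A_z = 0,
```

so B is everywhere tangent to the contours of A_z, which are therefore its field lines; moreover, the flux crossing any curve joining two field lines equals the difference in A_z between them, consistent with the density-of-field-lines statement above.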
If we remove all magnet material enclosed within the
chosen field line, it might be thought that no field lines should
be present inside the area and the flux density should be zero.
However, this is not precisely the case as by removing some
magnet material the magnetostatic problem is no longer the
same, and a new solution, with new field lines of B, must
Fabrication and characterization of a
lithium-glass-based composite neutron detector
arXiv:1410.2658v1 [physics.ins-det] 10 Oct 2014
G.C. Rich a,c,d, K. Kazkaz a, H.P. Martinez a, T. Gushue a,b,1
a Lawrence Livermore National Laboratory, Livermore, CA 94550, United States
b Department of Physics and Astronomy, San Francisco State University, San Francisco, CA 94132, United States
c Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
d Triangle Universities Nuclear Laboratory, Durham, NC 27708, United States
Abstract
A novel composite, scintillating material intended for neutron detection and
composed of small (1.5 mm) cubes of KG2-type lithium glass embedded in a
matrix of scintillating plastic has been developed in the form of a 2.2”-diameter,
3.1”-tall cylindrical prototype loaded with (5.82 ± 0.02) % lithium glass by mass.
The response of the material when exposed to ²⁵²Cf fission neutrons and various γ-ray sources has been studied; using the charge-integration method for pulse shape discrimination, good separation between neutron and γ-ray events is observed and intrinsic efficiencies of (5.88 ± 0.78) × 10⁻³ and (7.80 ± 0.77) × 10⁻⁵ for ²⁵²Cf fission neutrons and ⁶⁰Co γ rays are obtained; an upper limit for the sensitivity to ¹³⁷Cs γ rays is determined to be < 3.70 × 10⁻⁸. The neutron/γ discrimination capabilities are improved in circumstances when a neutron capture signal in the lithium glass can be detected in coincidence with a preceding elastic scattering event in the plastic scintillator; with this coincidence requirement, the intrinsic efficiency of the prototype detector for ⁶⁰Co γ rays is (3.3 ± 2.8) × 10⁻⁷ while its intrinsic efficiency for unmoderated ²⁵²Cf fission neutrons is (2.0 ± 0.3) × 10⁻³. Through use of subregion-integration ratios in addition to the coincidence requirement, the efficiency for γ rays from ⁶⁰Co is reduced to an upper limit of < 4.11 × 10⁻⁸ while the ²⁵²Cf fission neutron efficiency becomes (1.63 ± 0.22) × 10⁻³.
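As an illustration of the charge-integration pulse-shape discrimination named in the abstract, the sketch below computes the common tail-to-total integral ratio for a digitized pulse. The window boundaries and the toy decay times are hypothetical placeholders for this sketch, not values used for this detector.

```python
import numpy as np

def psd_ratio(pulse, prompt_end=20, total_end=200):
    """Charge-integration PSD: ratio of tail charge to total charge.
    `pulse` is a baseline-subtracted waveform (one sample per index);
    the window lengths here are illustrative, not the paper's values."""
    total = np.sum(pulse[:total_end])
    tail = np.sum(pulse[prompt_end:total_end])
    return tail / total if total > 0 else 0.0

# Toy example: a slow (neutron-like) pulse has more charge in the tail.
fast = np.exp(-np.arange(200) / 5.0)     # gamma-like: short decay time
slow = np.exp(-np.arange(200) / 40.0)    # neutron-like: long decay time
print(psd_ratio(fast), psd_ratio(slow))  # slow pulse gives the larger ratio
```

Cutting on this ratio (and, as the abstract describes, on ratios of further subregion integrals) is what separates neutron-capture and γ-ray event populations.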
1. The ³He supply problem
The well-documented shortage of ³He [1] has motivated numerous investigations into novel neutron detector technologies that can suitably replace ³He
Email addresses: [email protected] (G.C. Rich), [email protected] (K. Kazkaz)
1 Present address: Twitter, Inc., San Francisco, CA 94103, United States
Model of single-electron performance of micropixel
avalanche photodiodes
Z. Sadygov a, Kh. Abdullaev a, G. Akhmedov a, F. Akhmedov a, S. Khorev b,*, R. Mukhtarov a, A. Sadigov a, A. Sidelev a, A. Titov a, F. Zerrouk b, and V. Zhezher a
a Joint Institute for Nuclear Research, Dubna, Moscow Region, Russia
b Zecotek Photonics, Inc., Richmond, BC, Canada
E-mail: [email protected]
ABSTRACT: An approximate iterative model of the avalanche process in a pixel of a micropixel avalanche photodiode initiated by a single photoelectron is presented. The model describes the development of the avalanche process in time, taking into account the change of the electric field within the depleted region caused by internal discharge and external recharge currents. Conclusions obtained as a result of the modelling are compared with experimental data. Simulations show that the typical durations of the front and rear edges of the discharge current have the same magnitude of less than 50 ps. The front of the external recharge current has the same duration; however, the duration of the rear edge depends on the value of the quenching micro-resistor. It was found that the effective capacitance of the pixel, calculated as the slope of the linear dependence of the pulse charge on bias voltage, exceeds its real capacitance by a factor of two.
KEYWORDS: Photon detectors for UV, visible and IR photons (solid-state) (PIN diodes, APDs,
Si-PMTs, CCDs, EBCCDs, etc.); Detector modelling and simulations (electric fields, charge
transport, multiplication and induction, pulse formation, electron emission, etc.)
*
Corresponding author.
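A minimal sketch of the kind of iterative single-pixel model described in the abstract: the pixel capacitance discharges through the avalanche while the quenching resistor recharges it, and the overvoltage is updated step by step. All component values and the simple linear gain law below are illustrative assumptions for this sketch, not the authors' parameters.

```python
# Illustrative single-pixel model (toy parameters, not from the paper).
C_PIX = 50e-15     # pixel capacitance [F]
R_Q = 300e3        # quenching micro-resistor [ohm]
V_OV = 3.0         # initial overvoltage above breakdown [V]
DT = 1e-12         # time step [s]

v = V_OV           # instantaneous overvoltage across the depleted region
g = 5e-3           # toy avalanche conductance [S] while the avalanche is alive
recharge_trace = []

for step in range(20000):
    i_discharge = g * v if v > 0 else 0.0   # internal discharge current
    i_recharge = (V_OV - v) / R_Q           # external recharge current
    # Iterate the field: discharge lowers the overvoltage, recharge restores it
    v += (i_recharge - i_discharge) * DT / C_PIX
    if step == 5000:
        g = 0.0                             # toy quench: avalanche terminates
    recharge_trace.append(i_recharge)

# The trace shows a fast front edge (avalanche collapse of the overvoltage)
# and a rear edge governed by the R_Q * C_PIX recharge time constant,
# qualitatively matching the behaviour described in the abstract.
```

Even this crude iteration reproduces the asymmetry the abstract highlights: the front edge is set by the fast internal discharge, while the rear edge of the recharge current is set by the quenching resistor.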
arXiv:1410.2825v1 [hep-ph] 10 Oct 2014
Effective approach to top-quark decay and FCNC
processes at NLO accuracy
Cen Zhang
Centre for Cosmology, Particle Physics and Phenomenology, Université catholique de Louvain, 2 Chemin du Cyclotron, B-1348 Louvain-la-Neuve, Belgium
E-mail: [email protected]
Abstract. The top quark is expected to be a probe of new physics beyond the standard
model. Thanks to the large number of top quarks produced at the Tevatron and the LHC,
various properties of the top quark can now be measured accurately. An effective field theory
allows us to study the new physics effects in a model-independent way, and to this end accurate
theoretical predictions are required. In this talk we will discuss some recent results on top-quark
decay processes as well as flavor-changing processes, based on the effective field theory approach.
1. Introduction
Current strategies to search for new physics beyond the standard model (SM) can be broadly
divided into two categories. In the first category we look for new resonant states. In the second
category, new states are assumed to be heavy, and we look for their indirect effects in the
interactions of known particles.
The top quark has been a natural probe of new physics in the first category, due to its large mass and strong coupling to the electroweak symmetry breaking sector. Searches for resonant states through decay processes involving top quarks have been performed both at the Tevatron and at the LHC. Examples include tt̄ resonance searches, top partner production, and so on. Unfortunately, no new states have been discovered so far, and exclusion limits have been placed up to around the several-TeV scale.
On the other hand, top quark physics has entered a precision era, thanks to the large
number of top quarks produced at the Tevatron and LHC. Various properties of the top quark
have been already measured with high precision, and the upcoming LHC Run-II will continue
to increase the precision level. From the theory side, accurate SM predictions are also available.
As a result the focus of top quark physics is now moving toward the second category, i.e. to
measure accurately the known interactions and the rare processes of SM particles. Examples are
measurements on W -helicity fractions in top-quark decay, and searches for processes involving
flavor-changing neutral current (FCNC) of the top quark. These are the main topics of this talk.
2. The effective approach
When looking for deviations from the SM interactions, the standard approach is to utilize the
effective field theory (EFT) framework, in which deviations from the SM are parameterized by
including higher dimensional operators. The approach is valid when the new physics scale, Λ,
is higher than the scale of the process. Assuming the full SM gauge symmetries, the leading
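For reference, the expansion described in this paragraph is conventionally written as follows (a standard textbook form; the sentence above presumably goes on to identify the leading terms as dimension-six operators):

```latex
\mathcal{L}_{\rm eff} \;=\; \mathcal{L}_{\rm SM}
  \;+\; \sum_i \frac{C_i}{\Lambda^2}\, O_i \;+\; \mathcal{O}\!\left(\Lambda^{-4}\right),
```

where the O_i are dimension-six operators built from SM fields and invariant under the full SM gauge symmetries, and the Wilson coefficients C_i parameterize the model-independent deviations to be constrained by data.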
Non-resonant Higgs pair production in the bbbb final state at the LHC
David Wardrope, Eric Jansen, Nikos Konstantinidis, Ben Cooper, Rebecca Falla, Nurfikri Norjoharuddeen
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, United Kingdom
arXiv:1410.2794v1 [hep-ph] 10 Oct 2014
Abstract
We present a particle-level study of the Standard Model non-resonant Higgs-pair production process in the bbbb final state, at the Large Hadron Collider at √s = 14 TeV. Each Higgs boson is reconstructed from a pair of close-by jets formed with the anti-kt jet clustering algorithm, with radius parameter R = 0.4. Given the kinematic properties of the produced Higgs bosons, we show that
this reconstruction approach is more suitable than the use of two large-radius jets that capture all the decay products of a Higgs
boson, as was previously proposed in the literature. We also demonstrate that the sensitivity for observing this final state can be
improved substantially when the full set of uncorrelated angular and kinematic variables of the 4b system is combined, leading to
a statistical significance of ∼2σ per experiment with an integrated luminosity of 3 ab⁻¹.
Keywords: LHC, HL-LHC, Higgs-pair production, Higgs self-coupling
1. Introduction
The thorough investigation of the properties of the Higgs boson discovered by ATLAS and CMS [1, 2] is one of the highest
priorities in particle physics for the next two decades. A crucial
property is the trilinear Higgs self-coupling which can be measured by the observation of Higgs-pair production. At the Large
Hadron Collider (LHC), this is considered to be one of the most
challenging processes to observe, even with a data set corresponding to an integrated luminosity of 3 ab⁻¹, the target for the
proposed High Luminosity LHC (HL-LHC) programme. Several particle-level studies have been published since the Higgs
discovery, assessing the sensitivity of different decay channels
such as bbγγ [3, 4], bbττ [5] and bbWW [6]. The bbbb final
state was examined in Ref. [7], where it was found to have very
low sensitivity, and more recently in Ref. [8] where the use of a
tighter kinematic selection and jet substructure techniques appeared to give some improved sensitivity, although that study
considered only the 4b multijet process as background.
In this letter, we extend our previous work on resonant Higgspair production in the bbbb final state [9]—which inspired the
recent ATLAS analysis [10]—to the non-resonant case, considering all the relevant background processes, namely bbbb,
bbcc, and tt. The bbbb final state benefits from the high branching fraction of Higgs decaying to bb (57.8% in the Standard
Model (SM) for mH = 125.5 GeV, leading to about one third
of the Higgs pairs decaying to bbbb), but suffers from large
backgrounds. However, like in the previously studied resonant
case [9], the transverse momentum (pT ) of the Higgs bosons in
the non-resonant process in the SM is relatively high, with the
most probable value around 150 GeV [8]. By tailoring the event
selection to focus on this high-pT regime, where the two Higgs
bosons are essentially back-to-back, one has the benefits outlined in Ref. [9] for the resonant case. Requiring four b-tagged
jets, paired into two high-pT dijet systems, is a very powerful
way to reduce the backgrounds. This is particularly true for the
dominant multijet production, which has a cross section that
falls rapidly with increasing jet and dijet pT . There is also negligible ambiguity in pairing the four b-jets to correctly reconstruct the Higgs decays. Finally, due to the high boost, the four
jets will have high enough transverse momenta for such events
to be selected with high efficiency at the first level trigger of
ATLAS and CMS, with efficient high level triggering possible
through online b-tagging [10]. We note that triggering will be a
major challenge at the HL-LHC, but the substantial detector and
trigger upgrade programmes proposed by the two experiments should make it possible to maintain the high trigger efficiencies reported by ATLAS in the 8 TeV run [10] for channels that
are essential for key measurements at the HL-LHC, such as the
Higgs trilinear self-coupling.
2. Simulation of signal and background processes
Signal and background processes are modelled using simulated Monte Carlo (MC) event samples. The HH → bbbb
signal events are generated with MadGraph [11] 1.5.12, interfaced to Pythia [12] 8.175 for parton showering (PS) and
hadronisation, and using the CTEQ6L1 [13] leading-order (LO)
parton-density functions (PDF). The signal is scaled to a crosssection of 11.6 fb [14]. The tt events are simulated using
Powheg [15, 16] interfaced to Pythia 8.185. Only hadronic tt
events are considered in this study (including hadronic τ decays), as the semileptonic (dileptonic) decays are suppressed
by a lower branching fraction and the need for one (two) additional b-tagged jet(s) to pass the event selection. The bbbb
and bbcc backgrounds are generated by Sherpa [17] 2.1.1, using the CT10 [18] PDF set. These event samples are scaled to
their next-to-leading order (NLO) cross-section by applying a
k-factor of 1.5 [19]. For all the above background processes,
UMD-PP-014-011
Prepared for submission to JHEP
arXiv:1410.0362v1 [hep-ph] 1 Oct 2014
Identifying boosted new physics with non-isolated
leptons
Christopher Brust,a,b,c Petar Maksimovic,a Alice Sady,a Prashant Saraswat,a,b
Matthew T. Waltersa,d and Yongjie Xina
a Department of Physics and Astronomy, Johns Hopkins University, Charles Street, Baltimore, MD 21218, U.S.A.
b Department of Physics, University of Maryland, Campus Drive, College Park, MD 20742, U.S.A.
c Perimeter Institute for Theoretical Physics, Caroline Street N, Waterloo, Ontario, N2L 2Y5, Canada
d Department of Physics, Boston University, Commonwealth Avenue, Boston, MA 02215, U.S.A.
E-mail: [email protected], [email protected],
[email protected], [email protected], [email protected],
[email protected]
Abstract: We demonstrate the utility of leptons which fail standard isolation criteria in
searches for new physics at the LHC. Such leptons can arise in any event containing a
highly boosted particle which decays to both leptons and quarks. We begin by considering
multiple extensions to the Standard Model which primarily lead to events with non-isolated
leptons and are therefore missed by current search strategies. We emphasize the failure of
standard isolation variables to adequately discriminate between signal and SM background
for any value of the isolation cuts. We then introduce a new approach which makes use of
jet substructure techniques to distinguish a broad range of signals from QCD events. We
proceed with a simulated, proof-of-principle search for R-parity violating supersymmetry
to demonstrate both the experimental reach possible with the use of non-isolated leptons
and the utility of new substructure variables over existing techniques.
Keywords: Beyond Standard Model, Supersymmetry Phenomenology
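For context, a typical relative isolation variable of the kind the abstract argues fails for boosted topologies is (a textbook-style definition, not a formula quoted from this paper):

```latex
I_{\rm rel}(\ell) \;=\; \frac{1}{p_T^{\ell}} \sum_{\substack{i \neq \ell \\ \Delta R(i,\ell) < R_0}} p_T^{\,i},
\qquad \Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2},
```

with a fixed cone size R₀ of order 0.3–0.4; a lepton is declared isolated if I_rel falls below some threshold. When a highly boosted particle decays to both leptons and quarks, the nearby jet activity inflates the sum regardless of the threshold chosen, which is the failure mode the abstract describes.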
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-PH-EP/2014-239
2014/10/13
CMS-HIG-13-025
arXiv:1410.2751v1 [hep-ex] 10 Oct 2014
Searches for heavy scalar and pseudoscalar Higgs bosons and for flavor-violating decays of the top quark into a Higgs boson in pp collisions at √s = 8 TeV
The CMS Collaboration∗
Abstract
Searches are presented for heavy scalar (H) and pseudoscalar (A) Higgs bosons posited in two-Higgs-doublet model (2HDM) extensions of the standard model (SM). These searches are based on a data sample of pp collisions collected with the CMS experiment at the LHC at a center-of-mass energy of √s = 8 TeV and corresponding to an integrated luminosity of 19.5 fb⁻¹. The decays H → hh and A → Zh, where h
denotes an SM-like Higgs boson, lead to events with three or more isolated charged
leptons or with a photon pair accompanied by one or more isolated leptons. The
search results are presented in terms of the H and A production cross sections times
branching fractions and are further interpreted in terms of 2HDM parameters. We
place 95% CL cross section upper limits of approximately 7 pb on σB for H → hh
and 2 pb for A → Zh. Also presented are the results of a search for the rare decay
of the top quark that results in a charm quark and an SM Higgs boson, t → ch, the
existence of which would indicate a nonzero flavor-changing Yukawa coupling of the
top quark to the Higgs boson. We place a 95% CL upper limit of 0.56% on B(t → ch).
Submitted to Physical Review D
© 2014 CERN for the benefit of the CMS Collaboration. CC-BY-3.0 license
∗ See Appendix A for the list of collaboration members
The effective theory of fluids at NLO and
implications for dark energy
Guillermo Ballesteros
arXiv:1410.2793v1 [hep-th] 10 Oct 2014
Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg, Germany
[email protected]
Abstract
We present the effective theory of fluids at next-to-leading order in derivatives, including an
operator that has not been considered until now. The power-counting scheme and its connection
with the propagation of phonon and metric fluctuations are emphasized. In a perturbed FLRW
geometry the theory presents a set of features that make it very rich for modelling the acceleration
of the Universe. These include anisotropic stress, a non-adiabatic speed of sound and modifications to the standard equations of vector and tensor modes. These effects are determined by an
energy scale which controls the size of the higher-derivative terms and ensures that no instabilities
appear.
Directional detection of dark matter streams
Ciaran A. J. O’Hare∗ and Anne M. Green†
arXiv:1410.2749v1 [astro-ph.CO] 10 Oct 2014
School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, NG7 2RD, UK
(Dated: October 13, 2014)
Directional detection of WIMPs, in which the energies and directions of the recoiling nuclei are
measured, currently presents the only prospect for probing the local velocity distribution of Galactic
dark matter. We investigate the extent to which future directional detectors would be capable of
probing dark matter substructure in the form of streams. We analyse the signal expected from a
Sagittarius-like stream and also explore the full parameter space of stream speed, direction, dispersion and density. Using a combination of non-parametric directional statistics, a profile likelihood
ratio test and Bayesian parameter inference we find that within acceptable exposure times (O(10)
kg yr for cross sections just below the current exclusion limits) future directional detectors will
be sensitive to a wide range of stream velocities and densities. We also examine and discuss the
importance of the energy window of the detector.
I. INTRODUCTION
Attempts to directly detect weakly interacting massive
particles (WIMPs) using liquid and solid-state detectors
have a long history. A key goal of this field is the detection of one (or more) of the ‘smoking gun’ signals of
direct detection: annual modulation [1, 2], material dependence (e.g. Ref. [3]) and direction dependence [4].
Much theoretical work has been carried out investigating
how the WIMP particle physics (mass and cross-section)
and astrophysics (local density and velocity distribution)
could be inferred from the energies (e.g. Refs. [3, 5, 6]) or
energies and directions [7–11] of WIMP induced nuclear
recoil events.
The directionality of the WIMP event rate is due to the
motion of the Sun with respect to the Galactic halo. We
are moving towards the constellation Cygnus and hence
the nuclear recoils are expected to be strongly peaked in
the direction opposite to this. The strength of the signal
is expected to be large; an anisotropic set of recoils can
be discriminated from isotropic backgrounds with as few
as 10 events [7, 12], while the peak recoil direction can
be measured, and the Galactic origin of the scattering
particle confirmed, with around 30–50 events [13, 14].
In practice, directional detection is still in its early stages, with a number of prototype detectors under development. Directional detection is typically
achieved using gas time projection chambers (TPCs)
which contain low pressure gases such as CF₄, CS₂, C₄H₁₀ or ³He (see Ref. [15] for a review). After an interaction with a WIMP, the recoiling nucleus leaves an
ionisation track in its wake. The 3-dimensional reconstruction of the recoil track then allows the full energy
and direction dependence of the WIMP recoil spectrum
to be measured and, in principle, the WIMP velocity distribution can be inferred [16]. The detectors currently in
the prototype phase include DMTPC [17], DRIFT [18],
[email protected]
[email protected]
MIMAC [19], and NEWAGE [20]. Directional detection offers several theoretical advantages over its non-directional counterpart. First and foremost, there are no known backgrounds able to mimic the WIMP signal in its characteristic peak direction. Furthermore, it offers the only prospect for constraining the local velocity distribution of the dark matter.
The dependence of the experimental signals on the
form of the local WIMP velocity distribution has attracted a lot of attention in the literature, as there is
significant uncertainty in the form of the velocity distribution, and the parameters on which it depends (for a
review see Ref. [21]). Data from direct detection experiments are typically compared using the standard halo
model (SHM), for which the velocity distribution is an
isotropic Maxwell-Boltzmann distribution. The shape of
the true local velocity distribution is expected to depart
significantly from this simple model [22–25]. However the
use of the SHM is at least somewhat justified for several
reasons. Firstly there is no fully agreed upon alternative
parametrisation of the velocity distribution and secondly
the expected deviations from the SHM are unlikely to
affect the analysis of data from the current generation of
non-directional detectors, provided the free parameters
are appropriately marginalised over [26].
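For reference, the SHM referred to above takes the local velocity distribution to be of Maxwell-Boltzmann form (a standard expression; the typical parameter values quoted after it are common conventions assumed for this sketch, not values from this paper):

```latex
f(\mathbf{v}) \propto \exp\!\left(-\frac{|\mathbf{v}|^2}{2\sigma_v^2}\right),
\qquad |\mathbf{v}| < v_{\rm esc},
```

in the Galactic rest frame, with velocity dispersion σ_v = v_c/√2 ≈ 156 km s⁻¹ for a local circular speed v_c ≈ 220 km s⁻¹, truncated at the Galactic escape speed v_esc. The distribution seen by a detector is obtained by boosting by the Sun's velocity, which produces the peak recoil direction opposite to Cygnus discussed above.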
Nevertheless, results from N-body and hydrodynamical simulations of galaxy formation show deviations from
the putative smooth and isotropic SHM, including tidal
streams [27], a dark disk [28, 29] and debris flows [30, 31],
which could be detectable by future experiments. In
this paper we focus on substructure in the form of tidal
streams, which result from the disruption of sub-halos
accreted by the Milky Way. There are hints that such
a feature may pass through the Solar neighbourhood. A contiguous group of stars moving with a common velocity, possibly part of a tidal tail from the Sagittarius dwarf galaxy, has been observed nearby [32–35].
Moreover it has been argued that the dark matter content
of the stream is likely to be significantly more extended
than the stellar content and to have an offset of as much
as a few kpc [36].
We examine the capabilities of future directional dark
Nuclear Physics B Proceedings Supplement 00 (2014) 1–7
Observables in Higgsed Theories
Axel Maas1
arXiv:1410.2740v1 [hep-lat] 10 Oct 2014
University of Graz, Institute for Physics, Universitätsplatz 5, A-8010 Graz, Austria
Abstract
In gauge theories, observable quantities have to be gauge-invariant. In general, this requires composite operators, which usually have substantially different properties, e.g. masses, than the elementary particles. Theories with a Higgs field, in which the Brout-Englert-Higgs effect is active, provide an interesting exception to this rule. Due to an intricate mechanism, the Fröhlich-Morchio-Strocchi mechanism, composite operators with the same J^P quantum numbers, but modified internal quantum numbers, have the same masses as the corresponding elementary particles. This mechanism is supported using lattice gauge theory for the standard-model Higgs sector, i.e. Yang-Mills-Higgs theory with gauge group SU(2) and custodial symmetry group SU(2). Furthermore, the extension to the 2-Higgs-doublet model is briefly discussed, and some preliminary results are presented.
Keywords:
Higgs sector, Brout-Englert-Higgs effect, Gauge symmetry, Observables, Lattice, 2-Higgs-doublet model
1. Introduction
The standard model Higgs sector contains essentially an SU(2) Yang-Mills theory coupled to two flavors of a complex Higgs field, forming the custodial doublet. This theory therefore has a gauge group SU(2) and a global custodial SU(2) symmetry. Most importantly, the potential of the Higgs field is arranged such as to induce a Brout-Englert-Higgs (BEH) effect, to provide both the Higgs and the gauge bosons with normal masses.
The standard approach is to treat the theory perturbatively [1], in a suitable gauge [2]. Then, these elementary particles are interpreted as physical degrees of freedom, and especially as observable final states in experiments. Though this is a very successful approach, it poses the question why this is consistent. On the one hand, it is possible to write down gauges in which, e.g., the W and Z gauge bosons remain massless to all orders of perturbation theory [3]. On the other hand, both the gauge bosons and the Higgs carry the weak charge which, as a non-Abelian charge, is not observable and, indeed, gauge-dependent [4]. These particles can therefore not be observable degrees of freedom [5–7].
Indeed, a more formal treatment [7, 8] shows that only composite operators, effectively bound states, could be the appropriate gauge-invariant degrees of freedom. Though this is a conceptually satisfying answer, it still requires an explanation of why a perturbative description using the gauge-dependent degrees of freedom is successful in describing the observable spectrum. In particular, the same arguments about unobservability can be made for QCD, where the quarks and gluons are indeed not the appropriate observed states2.
This is resolved by the Fröhlich-Morchio-Strocchi (FMS) mechanism [7, 8], and the key element is the BEH effect: Select a suitable gauge, i.e. one with a non-vanishing Higgs expectation value. Considering a bound-state operator in the J^P = 0+ quantum number channel, and performing an expansion in the fluctuations η of the Higgs field φ around its vacuum expectation
Email address: [email protected] (Axel Maas)
1 This work has been supported by the DFG under grant numbers MA 3935/5-1, MA/3935/8-1 (Heisenberg program), and GK 1523/2.
2 The situation in QED is different, due to its Abelian nature [4].
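For reference, the FMS expansion begun in the truncated sentence above takes the following schematic form (a standard result from the literature, reproduced here for completeness rather than quoted from this proceeding): writing φ = v + η, the simplest gauge-invariant 0⁺ operator expands, up to constants and normalization conventions, as

```latex
\big\langle (\phi^\dagger\phi)(x)\,(\phi^\dagger\phi)(y)\big\rangle_c
  \;\simeq\; v^2\,\langle \eta(x)\,\eta(y)\rangle_c \;+\; \mathcal{O}(\eta^3),
```

so, to leading order in the fluctuations, the gauge-invariant composite correlator has the same pole, and hence the same mass, as the elementary (gauge-dependent) Higgs propagator.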
Feeling de Sitter
Andreas Albrecht∗
University of California at Davis, Department of Physics,
One Shields Ave, Davis CA 95616 USA
R. Holman†
arXiv:1410.2612v1 [hep-th] 9 Oct 2014
Physics Department, Carnegie Mellon University, Pittsburgh PA 15213 USA
Benoit J. Richard‡
University of California at Davis, Department of Physics,
One Shields Avenue, Davis CA 95616 USA
(Dated: October 13, 2014)
Abstract
We address the following question: To what extent can a quantum field tell if it has been placed
in de Sitter space? Our approach is to use the techniques of non-equilibrium quantum field theory
to compute the time evolution of a state which starts off in flat space for (conformal) times η < η0 ,
and then evolves in a de Sitter background turned on instantaneously at η = η0 . We find that the
answer depends on what quantities one examines. We study a range of them, all based on two-point correlation functions, and analyze which ones approach the standard Bunch-Davies values over time. The outcome of this analysis suggests that the nature of the equilibration process in this system is similar to that in more familiar systems.
PACS numbers: 98.80.Qc, 11.25.Wx
[email protected]
[email protected]
[email protected]
Separation of equilibrium part from an off-equilibrium state produced by
relativistic heavy ion collisions using a scalar dissipative strength
Takeshi Osada∗
arXiv:1409.6846v2 [nucl-th] 9 Oct 2014
Department of Physics, Faculty of Liberal Arts and Sciences,
Tokyo City University, Tamazutsumi 1-28-1, Setagaya-ku, Tokyo 158-8557, Japan
(Dated: October 10, 2014)
I propose a novel way to specify the initial conditions of a dissipative fluid dynamical model for a given energy density ε and baryon number density n, which does not impose the so-called Landau matching condition for an off-equilibrium state. The formulation is based on irreducible tensor expansion theory for the off-equilibrium distribution function. By introducing a scalar strength for an off-equilibrium state, γ ≡ Π/Peq (where Π is the bulk pressure and Peq is the corresponding equilibrium pressure), it is possible to separate the corresponding equilibrium energy density εeq and baryon number density neq from the given ε and n (equivalently, to determine the corresponding temperature T and chemical potential µ), using both kinetic-theory definitions and the thermodynamical stability condition. This separation is possible due to the thermodynamical stability condition, which defines a kinetic relation between Π and δn ≡ n − neq corresponding to an off-equilibrium equation of state specified by γ. For γ < 10⁻³, the temperature T and chemical potential µ are almost independent of γ, which means that the Landau matching condition is approximately satisfied. However, this is not the case for γ ≳ 10⁻³.
PACS numbers: 24.10.Nz, 25.75.-q
I. INTRODUCTION
Relativistic hydrodynamical models have been applied
to studies of matter that is produced in high-energy
hadron or nuclear collisions. Fluid dynamical descriptions, in particular, provide a simple picture of the spacetime evolution of the hot/dense matter produced by
ultra-relativistic heavy-ion collisions at RHIC and LHC
[1, 2]. It is to be expected that this simple picture makes it
possible to investigate the strongly interacting quark and
gluon matter present at the initial stage of the collisions.
The fluid model assumes that there exist local thermal
quantities of the matter, and the pressure gradients of
the matter cause collective phenomena [3]. These expected phenomena have been successfully observed as the elliptic flow coefficient v2 in the CERN SPS experiment NA49 [4], in RHIC experiments [6–8] and in recent ALICE experiments at the LHC [9], including the higher-order flow harmonics vn (n = 3, 4). These have been observed as functions of various characteristics, including the transverse momentum pT and the rapidity y. Hence, the hydrodynamical model has been widely accepted because of such experimental evidence. To investigate the properties of the quark and gluon matter created during such
ultra-relativistic heavy-ion collisions more precisely, it is
necessary to consider the effects of the viscosities and
corresponding dissipation [10]. These effects are introduced into the hydrodynamic simulations and a detailed
comparison between simulation and experimental data is
made (see, for example, Ref. [11]). However, as several authors have noted, dissipative hydrodynamics is not yet completely understood and there are issues associated with the determination of the hydrodynamical flow [12–14] (see also Ref. [15]). In this article, the Landau matching (fitting) condition that is necessary to specify the initial conditions of dissipative fluid dynamics is discussed. This may be related to the issue of defining the local rest frame [13].
[email protected]
The fundamental equations of relativistic fluid dynamics are defined by the conservation laws of energymomentum and the charge current vector (in this paper,
I assume the net baryon density as the conserved charge),
∂µ T µν (x) = 0,  (1a)
∂µ N µ (x) = 0.  (1b)
Here, T µν and N µ are respectively the energymomentum tensor and the conserved charge current at
a given point in space-time x, which can be obtained
by a coarse-graining procedure [16] with some finite size
(fluid cell size), lfluid . Hence, the fluid dynamical model
expressed as a coarse-graining theory describing macroscopic phenomena can be derived from the underlying
kinetic theory. In the case of a perfect fluid limit, the microscopic collision time scale τmicro is much shorter than
the macroscopic evolution time scale τmacro [17], thus
τmacro ≫ τmicro .  (2)
If the condition in eq.(2) is satisfied, the distribution
function instantaneously relaxes to its local equilibrium
form. In the local rest frame of the fluid, i.e., the frame
in which the fluid velocity is given by uµ (x) = (1, 0, 0, 0),
the local equilibrium distribution functions for particles
and for anti-particles are respectively given (within the
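For reference, the standard form of the local equilibrium distributions for particles and anti-particles (a textbook expression, given here under the assumption that this Jüttner-type form is the one being introduced) is

```latex
f^{\pm}_{\rm eq}(x,p) \;=\; \Big[\exp\!\big((E \mp \mu)/T\big) + a\Big]^{-1},
```

with the upper (lower) sign for particles (anti-particles), E the particle energy in the local rest frame, and a = +1, −1 or 0 for Fermi-Dirac, Bose-Einstein or Maxwell-Boltzmann statistics, respectively.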
Draft version October 9, 2014
Preprint typeset using LATEX style emulateapj v. 5/2/11
WHAT COULD WE LEARN FROM A SHARPLY FALLING POSITRON FRACTION?
Timur Delahaye1,2,3 , Kumiko Kotera1,4 and Joseph Silk1,5,6
arXiv:1404.7546v2 [astro-ph.HE] 8 Oct 2014
ABSTRACT
Recent results from the AMS-02 data have confirmed that the cosmic ray positron fraction increases
with energy between 10 and 200 GeV. This quantity should not exceed 50%, and it is hence expected
that it will either converge towards 50% or fall. We study the possibility that future data may show
the positron fraction dropping down abruptly to the level expected with only secondary production,
and forecast the implications of such a feature in terms of possible injection mechanisms that include
both dark matter and pulsars. Were a sharp steepening to be found, rather surprisingly, we conclude
that pulsar models would do at least as well as dark matter scenarios in terms of accounting for any
spectral cut-off.
Subject headings: cosmic rays, ISM: supernova remnants, dark matter, acceleration of particles, astroparticle physics
1. INTRODUCTION
The positron fraction, that is, the flux of cosmic-ray
positrons divided by the flux of electrons and positrons,
has attracted much interest since the publication of the
results of the PAMELA satellite (Adriani et al. 2009,
2013). PAMELA has indeed reported an anomalous rise
in the positron fraction with energy, between 10 and
200 GeV. These measurements have been confirmed recently by AMS-02 (Aguilar et al. 2013). The intriguing
question is: what may happen next? The positron fraction must either saturate or decline. In the latter case,
how abrupt a decline might we expect? The naive expectation is that a dark matter self-annihilation interpretation, bounded by the particle rest mass, should inevitably
generate a sharper cut-off than any astrophysical model.
Antiparticles are rare among cosmic rays, and can
be produced as secondary particles by cosmic ray nuclei while they propagate and interact in the interstellar
medium. The sharp increase observed in the positron
fraction is, however, barely compatible with the simplest models of secondary production. Various alternatives
have been proposed, such as a modification of the propagation model (Katz et al. 2009; Blum et al. 2013), or primary positron production scenarios, with pulsars (e.g.,
Grasso et al. 2009; Hooper et al. 2009; Delahaye et al.
2010; Blasi & Amato 2011; Linden & Profumo 2013)
1 Institut d'Astrophysique de Paris UMR7095 – CNRS, Université Pierre & Marie Curie, 98 bis boulevard Arago, F-75014 Paris, France
2 LAPTH, Université de Savoie, CNRS; 9 chemin de Bellevue, BP110, F-74941 Annecy-le-Vieux Cedex, France
3 Oskar Klein Centre for Cosmoparticle Physics, Department
of Physics, Stockholm University, SE-10691 Stockholm, Sweden
4 Department of Physics and Astronomy, University College
London, Gower Street, London WC1E 6BT, United Kingdom
5 The Johns Hopkins University, Department of Physics and
Astronomy, 3400 N. Charles Street, Baltimore, Maryland 21218,
USA
6 Beecroft Institute of Particle Astrophysics and Cosmology,
Department of Physics, University of Oxford, Denys Wilkinson
Building, 1 Keble Road, Oxford OX1 3RH, UK
or dark matter annihilation (e.g., Delahaye et al. 2008;
Arkani-Hamed et al. 2009; Cholis et al. 2009; Cirelli &
Panci 2009) as sources. The current data and the uncertainties inherent in the source models do not yet enable us to rule out these scenarios. It is however likely
that improved sensitivities at higher energies and a thorough measurement of the shape of the spectrum above
∼ 200 GeV will be able to constrain the models. This
question has been studied in earlier work (see for instance
Ioka 2010; Kawanaka et al. 2010; Pato et al. 2010; Mauro
et al. 2014); here we want to test more specifically the
possibility of a sharp drop of the positron fraction. An
original aspect of our work is to also convolve our results
with the cosmic-ray production parameter space for pulsars allowed by theory.
The AMS-02 data present a hint of flattening in the
positron fraction above 250 GeV. Such a feature is expected, as the positron fraction should not exceed 0.5,
and hence it should either converge towards 0.5 or start
decreasing. We investigate in this paper the following
question: what constraints could we put on dark matter annihilation and primary pulsar scenarios if the next
AMS-02 data release were to show a sharply dropping
positron fraction? A sharp drop could be deemed natural if the positron excess originates from the annihilation
of dark matter particles with a mass of several hundred
GeV. However, we show in this work that such a feature
would be highly constraining in terms of dark matter scenarios. More unexpectedly, we demonstrate that pulsar
models could also lead to similar results for a narrow parameter space. Interestingly, we find that pulsars lying
in this parameter space happen to be the only ones that
would be astrophysically capable of contributing to the
pair flux at this level.
In this paper, we first describe our method and our
assumptions, then we analyse the dark matter and pulsar
scenarios respectively. Finally, we discuss our results.
2. METHOD
Hunting composite vector resonances at the LHC:
naturalness facing data
arXiv:1410.2883v1 [hep-ph] 10 Oct 2014
Davide Greco1 and Da Liu1,2
1 Institut de Théorie des Phénomènes Physiques, EPFL, Lausanne, Switzerland
2 State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, People's Republic of China
Abstract
We introduce a simplified low-energy effective Lagrangian description of the phenomenology of
heavy vector resonances in the minimal composite Higgs model, based on the coset SO(5)/SO(4),
analysing in detail their interaction with lighter top partners. Our construction is based on robust
assumptions on the symmetry structure of the theory and on plausible natural assumptions on
its dynamics. We apply our simplified approach to triplets in the representations (3, 1) and (1, 3)
and to singlets in the representation (1, 1) of SO(4). Our model captures the basic features of
their phenomenology in terms of a minimal set of free parameters and can be efficiently used
as a benchmark in the search for heavy spin-1 states at the LHC and at future colliders. We
devise an efficient semi-analytic method to convert experimental limits on σ × BR into bounds
on the free parameters of the theory and we recast the presently available 8 TeV LHC data on
experimental searches of spin-1 resonances as exclusion regions in the parameter space of the
models. These exclusion regions are conveniently interpreted as a test of the notion of naturalness.
Preprint typeset in JHEP style - HYPER VERSION
OUTP-14-15P
SI-HEP-2014-24
QFET-2014-18
arXiv:1410.2804v1 [hep-ph] 10 Oct 2014
Master integrals for the two-loop penguin contribution in non-leptonic B-decays
Guido Bell(a) and Tobias Huber(b)
(a) Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP, United Kingdom
(b) Theoretische Physik 1, Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Straße 3, D-57068 Siegen, Germany
[email protected]
[email protected]
Abstract: We compute the master integrals that arise in the calculation of the leading penguin amplitudes in non-leptonic B-decays at two-loop order. The application of differential equations in a canonical basis enables us to give analytic results for all master integrals in terms of iterated integrals with rational weight functions. This is the first application of the method to a case with two different internal masses.
Keywords: B-physics, QCD, NNLO Computations.
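For orientation, a canonical basis in the sense used above is a choice of master integrals f(x; ε) for which the ε-dependence of the differential equations factorizes (schematically, in the by-now standard form):

\mathrm{d}\vec{f}(x;\epsilon) \;=\; \epsilon\, \mathrm{d}A(x)\, \vec{f}(x;\epsilon),
\qquad
\mathrm{d}A(x) \;=\; \sum_k A_k\, \mathrm{d}\log \alpha_k(x),

with constant matrices A_k; solving order by order in ε then produces exactly the iterated integrals mentioned in the abstract.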
Heavy Higgs Decays into Sfermions in the Complex MSSM: A Full One-Loop Analysis
arXiv:1410.2787v1 [hep-ph] 10 Oct 2014
S. Heinemeyer1,∗ and C. Schappacher2,†,‡
1 Instituto de Física de Cantabria (CSIC-UC), Santander, Spain
2 Institut für Theoretische Physik, Karlsruhe Institute of Technology, D-76128 Karlsruhe, Germany
Abstract
For the search for additional Higgs bosons in the Minimal Supersymmetric Standard Model (MSSM), as well as for future precision analyses in the Higgs sector, a precise knowledge of their decay properties is mandatory. We evaluate all two-body decay modes of the heavy Higgs bosons into sfermions in the MSSM with complex parameters (cMSSM). The evaluation is based on a full one-loop calculation of all decay channels, also including hard QED and QCD radiation. The dependence of the heavy Higgs-boson decays on the relevant cMSSM parameters is analyzed numerically. We find sizable contributions to many partial decay widths. They are roughly of O(15%) of the tree-level results, but can go up to 30% or higher. The size of the electroweak one-loop
corrections can be as large as the QCD corrections. The full one-loop contributions
are important for the correct interpretation of heavy Higgs boson search results at the
LHC and, if kinematically allowed, at a future linear e+ e− collider. The evaluation of
the branching ratios of the heavy Higgs bosons will be implemented into the Fortran
code FeynHiggs.
∗ email: [email protected]
† email: [email protected]
‡ former address
arXiv:1410.2785v1 [hep-ph] 10 Oct 2014
New features of MadAnalysis 5 for analysis design and reinterpretation
Eric Conte(1), Béranger Dumont(2,3), Benjamin Fuks(4,5) and Thibaut Schmitt(5)
(1) Groupe de Recherche de Physique des Hautes Énergies (GRPHE), Université de Haute-Alsace, IUT Colmar, 34 rue du Grillenbreit BP 50568, 68008 Colmar Cedex, France
(2) Center for Theoretical Physics of the Universe, Institute for Basic Science (IBS), Daejeon 305-811, Korea
(3) LPSC, Université Grenoble-Alpes, CNRS/IN2P3, 53 Avenue des Martyrs, F-38026 Grenoble, France
(4) CERN, PH-TH, CH-1211 Geneva 23, Switzerland
(5) Institut Pluridisciplinaire Hubert Curien/Département Recherches Subatomiques, Université de Strasbourg/CNRS-IN2P3, 23 Rue du Loess, F-67037 Strasbourg, France
E-mail: [email protected], [email protected], [email protected],
[email protected]
Abstract. We present MadAnalysis 5, an analysis package dedicated to phenomenological
studies of simulated collisions occurring in high-energy physics experiments. Within this
framework, users are invited, through a user-friendly Python interpreter, to implement physics
analyses in a very simple manner. A C++ code is then automatically generated, compiled and
executed. Very recently, the expert mode of the program has been extended so that analyses
with multiple signal/control regions can be handled. Additional observables have also been
included, and an interface to several fast detector simulation packages has been developed, one
of them being a tune of the Delphes 3 software. As a result, a recasting of existing ATLAS
and CMS analyses can be achieved straightforwardly.
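As a rough illustration of the normal-mode workflow just described (apply cuts, record a cut-flow, fill a differential distribution), here is a plain-Python stand-in. It mimics the logic only; it is not the MadAnalysis 5 command syntax, and the toy event content and thresholds are invented for the example.

# Plain-Python stand-in for a cut-and-count analysis with one histogram.
# This mimics the cut-flow / distribution logic, NOT the MadAnalysis 5
# interface; events and thresholds are invented placeholders.
import random

random.seed(1)
# Toy events: leading-jet pT and missing ET, in GeV.
events = [{"jet_pt": random.expovariate(1.0 / 80.0),
           "met": random.expovariate(1.0 / 60.0)} for _ in range(10000)]

cutflow = [("all events", len(events))]
selected = [e for e in events if e["jet_pt"] > 50.0]
cutflow.append(("jet pT > 50 GeV", len(selected)))
selected = [e for e in selected if e["met"] > 100.0]
cutflow.append(("MET > 100 GeV", len(selected)))

# Differential distribution: MET histogram in 20 GeV bins up to 300 GeV.
bins = [0] * 15
for e in selected:
    i = int(e["met"] // 20.0)
    if i < len(bins):
        bins[i] += 1

for name, count in cutflow:
    print(f"{name:>16}: {count}")
print("MET histogram:", bins)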
1. MadAnalysis 5 in a nutshell
While both LHC experiments are currently pushing limits on particles beyond the Standard
Model to higher and higher scales, tremendous progress has been made in the development of
Monte Carlo event generators and satellite programs, in particular with respect to precision
predictions, new physics implementations and event analysis. The public MadAnalysis 5
package [1, 2] addresses this last aspect and provides a framework for analyzing events generated
either at the parton-level, after hadronization or after detector simulation. The program hence
allows one to efficiently design and recast LHC analyses. The user can in this way investigate
any physics model and determine the LHC sensitivity to its signatures by either conceiving
a novel analysis or recasting existing ATLAS and CMS studies. MadAnalysis 5 starts by reading event samples as generated by any Monte Carlo event generator that satisfies community-endorsed output formats. Next, it applies selection cuts and computes differential distributions as requested by the user. In its normal mode of running, the results are presented in the form of histograms and cut-flow tables that are collected within HTML and LaTeX reports, whilst in the expert mode of the program, they are presented in text files compliant with the
Nuclear Physics B Proceedings Supplement 00 (2014) 1–3
HIP-2014-21/TH
Higgs(es) in triplet extended supersymmetric standard model at the LHC
Priyotosh Bandyopadhyay a,∗, Katri Huitu a, Aslı Sabancı Keçeli a
arXiv:1410.2762v1 [hep-ph] 10 Oct 2014
a Department of Physics, University of Helsinki and Helsinki Institute of Physics, P.O. Box 64 (Gustaf Hällströmin katu 2), FIN-00014, Finland
Abstract
The recent discovery of the ∼125 GeV Higgs boson by the ATLAS and CMS experiments has set strong constraints on the parameter space of the minimal supersymmetric standard model (MSSM). However, these constraints can be weakened by enlarging the Higgs sector, e.g. by adding a triplet chiral superfield. In particular, we focus on the Y = 0 triplet extension of the MSSM, known as TESSM, where the electroweak contributions to the lightest Higgs mass are also important and comparable with the strong contributions. We discuss this in the context of the observed Higgs-like particle around 125 GeV and also examine the status of the other Higgs bosons in the model. We calculate Br(Bs → Xs γ) in this model, where three physical charged Higgs bosons and three charginos contribute. We show that the doublet-triplet mixing in the charged Higgs sector plays an important role in constraining the parameter space. In this context we also discuss the phenomenology of a light charged Higgs boson, probing the H1±-W∓-Z coupling at the LHC.
Keywords: TESSM, Higgs boson, Charged Higgs, LHC
1. Introduction
The discovery of the Higgs boson [1, 2] has opened a new window on electroweak symmetry breaking (EWSB) and its underlying theory. The experimental results for Higgs production and decay channels are in very good agreement with the Standard Model (SM) predictions [3, 4], but there is still room for other models. Such models are often motivated by the problems of the SM, such as naturalness, the lack of neutrino masses and the absence of a dark matter candidate. Supersymmetry (SUSY) solves the famous hierarchy problem and also provides dark matter candidates.
In the minimal supersymmetric extension of the SM (MSSM), the lightest neutral Higgs mass satisfies mh ≤ mZ at tree level, and the measured Higgs mass can only be achieved with the help of large radiative corrections.
∗ Corresponding author
Email addresses: [email protected] (Priyotosh Bandyopadhyay), [email protected] (Katri Huitu), [email protected] (Aslı Sabancı Keçeli)
The observation of the ∼125 GeV Higgs thus requires either a large mixing between the third-generation squarks and/or soft masses greater than a few TeV [5, 6]. This pushes the SUSY mass scale to ≳ a few TeV in the most constrained scenarios [7, 8].
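For orientation, the size of the required corrections can be read off the familiar one-loop (top/stop) approximation to the lightest MSSM Higgs mass, quoted here schematically in standard notation; this is textbook background, not the paper's own computation:

m_h^2 \;\simeq\; m_Z^2 \cos^2 2\beta
 \;+\; \frac{3\, m_t^4}{4\pi^2 v^2}
 \left[ \ln\frac{M_S^2}{m_t^2}
 \;+\; \frac{X_t^2}{M_S^2}\left(1 - \frac{X_t^2}{12\, M_S^2}\right) \right],

where M_S is the average stop mass, X_t = A_t - µ cot β the stop mixing parameter, and v ≈ 246 GeV. Reaching m_h ≈ 125 GeV from the tree-level bound m_h ≤ m_Z forces M_S and/or X_t to be large, which is precisely the tension the triplet contribution relaxes.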
Here we consider the triplet supersymmetric extension of the Standard Model (TESSM). This extension helps to accommodate a light Higgs boson around 125 GeV without pushing the SUSY mass scale very high. This happens for two reasons: firstly, the triplet gives an extra tree-level contribution, and secondly, it also contributes substantially at the one-loop level. TESSM has an extended Higgs sector which contains several neutral as well as charged Higgs bosons. In this contribution we report our analysis of the case where the lightest CP-even neutral scalar is the discovered Higgs boson candidate at ∼125 GeV.
In section 2 we discuss the model briefly, and in section 3 we present the status of the ∼125 GeV Higgs in
this model after the Higgs discovery. In section 4 we
discuss the charged Higgs phenomenology at the LHC
and conclude in section 5.
Pseudoscalar meson photoproduction on nucleon target
G.H. Arakelyan1, C. Merino2 and Yu.M. Shabelski3
arXiv:1410.2754v1 [hep-ph] 10 Oct 2014
1 A.I. Alikhanyan Scientific Laboratory, Yerevan Physics Institute, Yerevan 0036, Armenia
e-mail: [email protected]
2 Departamento de Física de Partículas, Facultade de Física, and Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Galiza, Spain
e-mail: [email protected]
3 Petersburg Nuclear Physics Institute, NRC Kurchatov Institute, Gatchina, St. Petersburg 188350, Russia
e-mail: [email protected]
Abstract
We consider the photoproduction of secondary mesons in the framework of the Quark-Gluon String model. At relatively low energies, not only cylindrical but also planar diagrams have to be accounted for. To estimate the significant contribution of planar diagrams in γp collisions at rather low energies, we use an expression obtained from the corresponding phenomenological expression for πp collisions. The results obtained with the model are compared with the existing SLAC experimental data. The model predictions for light meson production at HERMES energies are also presented.
WITS-CTP-145
Evolution of Quark Masses and Flavour Mixings in the 2UED
arXiv:1410.2719v1 [hep-ph] 10 Oct 2014
Ammar Abdalgabar1 and A. S. Cornell2
National Institute for Theoretical Physics; School of Physics, University of the Witwatersrand, Wits 2050, South Africa
Abstract
The evolution of the Yukawa couplings and quark mixings is studied using the one-loop renormalisation group equations of six-dimensional models compactified in different possible ways to yield standard four space-time dimensions. Different possibilities for the matter fields are discussed, that is, whether they live in the bulk or are localised to the brane. These two possibilities give rise to quite similar behaviours in the evolution of the Yukawa couplings and mass ratios. We find that for both scenarios, valid up to the unification scale, significant corrections are observed.
Keywords: fermion masses, extra dimensions, beyond the Standard Model
1 Introduction
A theory of fermion masses and the associated mixing angles is absent from the Standard Model (SM), providing an interesting puzzle and a likely window onto physics beyond the SM. In the SM one of the main issues is to understand the origin of quark and lepton masses, or the apparent hierarchy of family masses and quark mixing angles. Perhaps if we understood this we would also know the origins of CP violation. A clear feature of the fermion mass spectrum is [1, 2]
m_u ≪ m_c ≪ m_t ,   m_d ≪ m_s ≪ m_b ,   m_e ≪ m_µ ≪ m_τ .   (1.1)
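As a concrete, if simplified, illustration of what evolving Yukawa couplings means, the sketch below integrates the one-loop Standard Model RGE for the top Yukawa together with the gauge couplings (GUT-normalised g1, lighter Yukawas neglected, rounded MZ-scale inputs). The paper's six-dimensional equations add Kaluza-Klein power-law terms on top of this 4D running, which are not modelled here.

# One-loop 4D SM running of (g1, g2, g3, yt); t = ln(mu / MZ).
# Standard one-loop beta functions with GUT-normalised g1; the 2UED models
# discussed in the text add KK power-law contributions not included here.
import numpy as np
from scipy.integrate import solve_ivp

B = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])   # one-loop gauge coefficients

def rge(t, y):
    g1, g2, g3, yt = y
    k = 1.0 / (16.0 * np.pi**2)
    dg = k * B * np.array([g1, g2, g3])**3
    # Top Yukawa, neglecting yb and ytau; 17/20 goes with GUT-normalised g1.
    dyt = k * yt * (4.5 * yt**2 - 8.0 * g3**2 - 2.25 * g2**2 - 0.85 * g1**2)
    return [dg[0], dg[1], dg[2], dyt]

y0 = [0.46, 0.65, 1.22, 0.94]                    # rounded MZ-scale inputs
t_end = np.log(1.0e16 / 91.19)                   # evolve up to ~10^16 GeV
sol = solve_ivp(rge, (0.0, t_end), y0, rtol=1e-8)
print("g1, g2, g3, yt at 1e16 GeV:", np.round(sol.y[:, -1], 3))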
Apart from the discovery of the Higgs boson at the Large Hadron Collider (LHC), another important goal of the LHC is to explore the new physics that may be present at the TeV scale. Among such models, those with extra spatial dimensions offer many possibilities for model building and TeV-scale physics scenarios which can be constrained or explored. As such, there have been many efforts to understand the fermion
1 [email protected]
2 [email protected]
CP-violating top-Higgs coupling and top polarisation
in the diphoton decay of the thj channel at the LHC
J. Yue
arXiv:1410.2701v1 [hep-ph] 10 Oct 2014
ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University
of Sydney, NSW 2006, Australia
E-mail: [email protected]
Abstract: We study the observability of non-standard top Yukawa couplings in the pp → t(→ ℓνb)h(→ γγ)j channel at the 14 TeV high-luminosity LHC (HL-LHC). The small diphoton branching ratio is enhanced in the presence of CP-violating top-Higgs interactions. We find that the signal significance may reach 2.7σ and 7.7σ for the mixed and pseudoscalar cases respectively, with the modulus of the top-Higgs coupling taking the Standard Model value, |y_t| = y_t^SM. Furthermore, the different couplings modify the polarisation of the top quark and can be distinguished via asymmetries in the spin correlations of the leptons from the top decay.
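A common parametrisation of the interaction being probed (our shorthand; the paper's notation may differ) writes the top-Higgs vertex with a CP phase ξ:

\mathcal{L}_{t\bar{t}h} \;=\; -\,\frac{m_t}{v}\,
   \bar{t}\left(\cos\xi \;+\; i\gamma_5 \sin\xi\right) t\, h ,

so that ξ = 0 is the SM-like pure scalar case, ξ = π/2 the pure pseudoscalar case, and intermediate ξ the CP-mixed case, with the modulus of the coupling held at its SM value as in the abstract.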
arXiv:1410.2696v1 [hep-ph] 10 Oct 2014
General formulation of the sector-improved residue
subtraction
David Heymes
Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,
Sommerfeldstr. 16, 52074 Aachen, Germany
E-mail: [email protected]
Abstract. The main theoretical tool to provide precise predictions for scattering cross sections
of strongly interacting particles is perturbative QCD. Starting at next-to-leading order (NLO)
the calculation suffers from unphysical IR-divergences that cancel in the final result. At NLO
there exist general subtraction algorithms to treat these divergences during a calculation. Since
the LHC demands more precise theoretical predictions, general subtraction methods at next-to-next-to-leading order (NNLO) are needed.
This proceedings contribution outlines the four-dimensional formulation of the sector-improved residue subtraction. The subtraction scheme STRIPPER, and in particular its extension to arbitrary multiplicities, is explained. It thus furnishes a general framework for the calculation of NNLO cross sections in perturbative QCD.
1. Introduction
We are interested in predicting the hadronic cross section, which is known to factorize into
parton distribution functions and the partonic cross section
\sigma_{h_1 h_2}(P_1, P_2) \;=\; \sum_{ab} \int_0^1\!\!\int_0^1 \mathrm{d}x_1\, \mathrm{d}x_2\; f_{a/h_1}(x_1, \mu_F)\, f_{b/h_2}(x_2, \mu_F)\; \hat{\sigma}_{ab}(x_1 P_1, x_2 P_2; \alpha_s(\mu_R), \mu_R, \mu_F) \, .   (1)
The summation runs over initial state partons {a, b}, i.e. massless quarks and gluons. The
parton distribution function fa/h1 (x1 , µF ) can be understood as the probability density for
finding parton a inside hadron h1 carrying the momentum p1 = x1 P1 . Parton distribution
functions are non-perturbative objects and have to be determined experimentally.
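To make Eq. (1) concrete, the toy computation below performs the double convolution numerically with an invented PDF shape and a dummy partonic cross section; a real application would use a fitted PDF set and the actual \hat{\sigma}_{ab}, so every number here is a placeholder.

# Toy numerical evaluation of the factorization formula (1) for one channel:
# sigma = int dx1 dx2 f(x1) f(x2) sigma_hat(x1 * x2 * S).
# The PDF shape and partonic cross section are invented placeholders.
from scipy import integrate

S = 13000.0**2  # hadronic c.m. energy squared (GeV^2)

def f_toy(x):
    """Invented valence-like PDF shape (not a real fit)."""
    return 1.5 * x**-0.5 * (1.0 - x)**3

def sigma_hat(shat):
    """Dummy partonic cross section (pb) above a 100 GeV threshold."""
    return 1.0e3 / shat if shat > 100.0**2 else 0.0

sigma, err = integrate.dblquad(
    lambda x2, x1: f_toy(x1) * f_toy(x2) * sigma_hat(x1 * x2 * S),
    1.0e-4, 1.0,             # x1 range
    lambda x1: 1.0e-4,       # x2 lower limit
    lambda x1: 1.0)          # x2 upper limit
print(f"toy hadronic cross section: {sigma:.3g} pb")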
In contrast, the partonic cross section \hat{\sigma}_{ab} can be calculated using perturbative QCD. Including terms up to next-to-next-to-leading order, its expansion in the strong coupling \alpha_s reads

\hat{\sigma}_{ab} \;=\; \hat{\sigma}_{ab}^{(0)} + \hat{\sigma}_{ab}^{(1)} + \hat{\sigma}_{ab}^{(2)} \, .   (2)

The leading order contribution is known as the Born approximation and reads

\hat{\sigma}_{ab}^{(0)} \;=\; \hat{\sigma}_{ab}^{B} \;=\; \frac{1}{2\hat{s}\, N_{ab}} \int \mathrm{d}\Phi_n\, \langle \mathcal{M}_n^{(0)} | \mathcal{M}_n^{(0)} \rangle\, F_n \, ,   (3)
where n is the number of final-state particles and dΦ_n the phase-space measure. The measurement function F_n defines the infrared-safe observable and prevents n massless partons
Search for intrinsic charm in vector boson production accompanied by heavy flavor jets
arXiv:1410.2616v1 [hep-ph] 9 Oct 2014
P-H. Beauchemin1, V.A. Bednyakov2, G.I. Lykasov2, Yu.Yu. Stepanenko2,3
1 Tufts University, Medford, MA, USA
2 Joint Institute for Nuclear Research, Dubna 141980, Moscow region, Russia
3 Gomel State University, Gomel 246019, Republic of Belarus
Abstract
Up to now, the existence of an intrinsic (or valence-like) heavy quark component of the proton distribution functions has not been confirmed or rejected. The LHC with pp-collisions at √s = 7–13 TeV can supply us with extra unique information concerning this hypothesis. On the basis of our theoretical studies, it is demonstrated that investigations of the intrinsic heavy quark contributions look very promising in processes like pp → Z/W + c(b) + X. The ratio of the Z + heavy-jets to W + heavy-jets differential cross sections, as a function of the leading-jet transverse momentum, is proposed to maximize the sensitivity to the intrinsic charm component of the proton.
1. Introduction
Parton distribution functions (PDFs) give the probability of finding in a proton a quark or a gluon (parton) with a certain longitudinal momentum fraction at a given resolution scale. The PDF f_a(x, µ) is thus a function of the proton momentum fraction x carried by the parton a at the QCD scale µ. For small values of µ, corresponding to long distance scales larger than 1/µ0, the PDFs cannot be calculated from the first principles of QCD (although some progress has been made using lattice methods [1]). The unknown functions f_a(x, µ0) must be found empirically from a phenomenological model fitted to a large variety of data at µ > µ0 in a "QCD global analysis" [2, 3]. The PDF f_a(x, µ) at a higher resolution scale µ > µ0 can however be calculated from f_a(x, µ0) within perturbative QCD using the DGLAP Q²-evolution equations [4].
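For reference, the leading-order DGLAP evolution referred to here has the schematic form (standard notation, with P_ab the splitting functions):

\mu^2 \frac{\partial f_a(x, \mu^2)}{\partial \mu^2}
 \;=\; \frac{\alpha_s(\mu^2)}{2\pi} \sum_b \int_x^1 \frac{\mathrm{d}z}{z}\,
 P_{ab}(z)\, f_b\!\left(\frac{x}{z}, \mu^2\right).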
The limitation in the accuracy at which PDFs are determined constitutes an
important source of systematic uncertainty for Standard Model measurements and
for multiple searches for New Physics at hadron colliders. The LHC facility is a
laboratory where PDFs can be studied and their description improved. Inclusive
W ± and Z-boson production measurements performed with the ATLAS detector
have, for example, introduced a novel sensitivity to the strange quark density at
x ∼ 0.01 [5].
Half-lives of α decay from natural nuclides and from
superheavy elements
arXiv:1410.2664v1 [nucl-th] 10 Oct 2014
Yibin Qian a,b,∗, Zhongzhou Ren a,b,c,d,∗
a Key Laboratory of Modern Acoustics and Department of Physics, Nanjing University, Nanjing 210093, China
b Joint Center of Nuclear Science and Technology, Nanjing University, Nanjing 210093, China
c Kavli Institute for Theoretical Physics China, Beijing 100190, China
d Center of Theoretical Nuclear Physics, National Laboratory of Heavy-Ion Accelerator, Lanzhou 730000, China
Abstract
Recently, experimental studies of α decays with very long lifetimes have become one of the hot topics in contemporary nuclear physics [e.g. N. Kinoshita et al. (2012) [2] and J.W. Beeman et al. (2012) [4]]. In this study, we have systematically investigated the extremely long-lived α-decaying nuclei within a generalized density-dependent cluster model involving the experimental nuclear charge radii. In detail, the important density distribution of the daughter nucleus is deduced from the corresponding experimental charge radii, leading to an improved α-core potential in the quantum tunneling calculation of the α-decay width. Besides the excellent agreement between theory and experiment, predictions of the half-lives of possible candidates for natural α emitters are made for future experimental detection. In addition, the recently confirmed α-decay chain from 294117 is well described, including the long-lived α-decaying 270Db, i.e., a positive step towards the "island of stability" in the superheavy mass region.
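Schematically, tunneling calculations of this kind organise the half-life as below (our shorthand for the generic WKB structure, not the paper's exact formulae):

T_{1/2} \;=\; \frac{\ln 2}{\nu_0\, P}, \qquad
P \;=\; \exp\!\left[-\,\frac{2}{\hbar}\int_{r_1}^{r_2}
   \sqrt{2\mu\left(V(r) - Q_\alpha\right)}\;\mathrm{d}r\right],

where ν0 is the assault frequency, r1 and r2 the classical turning points, µ the reduced mass, and V(r) the α-core potential, here improved through density distributions deduced from experimental charge radii.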
Keywords:
PACS: 23.60.+e, 21.60.Gx, 21.10.Ft, 21.10.Tg
∗ Corresponding author.
Email addresses: [email protected] (Yibin Qian), [email protected] (Zhongzhou Ren)
Giant and pigmy dipole resonances in 4He, 16,22O, and 40Ca from chiral nucleon-nucleon interactions
S. Bacca,1,2 N. Barnea,3 G. Hagen,4,5 M. Miorelli,1,6 G. Orlandini,7,8 and T. Papenbrock5,4
1 TRIUMF, 4004 Wesbrook Mall, Vancouver, BC, V6T 2A3, Canada
2 Department of Physics and Astronomy, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
3 Racah Institute of Physics, Hebrew University, 91904, Jerusalem
4 Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
5 Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996, USA
6 Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
7 Dipartimento di Fisica, Università di Trento, Via Sommarive 14, I-38123 Trento, Italy
8 Istituto Nazionale di Fisica Nucleare, TIFPA, Via Sommarive 14, I-38123 Trento, Italy
(Dated: October 10, 2014)
arXiv:1410.2258v1 [nucl-th] 8 Oct 2014
We combine the coupled-cluster method and the Lorentz integral transform for the computation of
inelastic reactions into the continuum. We show that the bound–state–like equation characterizing
the Lorentz integral transform method can be reformulated based on extensions of the coupledcluster equation-of-motion method, and we discuss strategies for viable numerical solutions. Starting
from a chiral nucleon-nucleon interaction at next-to-next-to-next-to-leading order, we compute the
giant dipole resonances of 4 He, 16,22 O and 40 Ca, truncating the coupled-cluster equation-of-motion
method at the two-particle-two-hole excitation level. Within this scheme, we find a low-lying E1
strength in the neutron-rich 22 O nucleus, which compares fairly well with data from [Leistenschneider
et al., Phys. Rev. Lett. 86, 5442 (2001)]. We also compute the electric dipole polarizability in 40Ca. Deficiencies of the employed Hamiltonian lead to overbinding, too-small charge radii, and a too-small electric dipole polarizability in 40Ca.
PACS numbers: 21.60.De, 24.10.Cn, 24.30.Cz, 25.20.-x
I. INTRODUCTION
The inelastic response of an A-body system to perturbative probes is a basic property in quantum physics. It contains important information about the dynamical structure of the system. For example, in the history of nuclear physics the study of photonuclear reactions led to the discovery of giant dipole resonances (GDR) [1], originally interpreted as a collective motion of protons against neutrons. For neutron-rich nuclei far from the valley of stability, such collective modes exhibit a fragmentation with low-lying strength, also called pigmy dipole resonances (see, e.g., Ref. [2]), typically interpreted as an oscillation of the excess neutrons against a core made of all other nucleons.
Recently, progress has been made in computing properties of medium-mass and some heavy nuclei from first principles using a variety of methods, such as the coupled-cluster method [3–5], the in-medium similarity-renormalization-group method [6, 7], the self-consistent Green's function method [8, 9], and lattice effective field theory [10]. Although most of these methods have focused on bound-state properties of nuclei, there has been progress in describing the physics of unbound nuclear states and elastic neutron/proton scattering, with applications to the neutron-rich helium [11] and calcium isotopes [5, 12, 13]. However, these continuum calculations are currently limited to states that are of single-particle-like structure and below multi-nucleon breakup thresholds.
The microscopic calculation of final-state continuum wave functions of nuclei in the medium-mass regime still constitutes an open theoretical problem. This is due to the fact that at a given continuum energy the wave function of the system has many different components (channels), corresponding to all its partitions into different fragments of various sizes. Working in configuration space, one has to find the solution of the many-body Schrödinger equation with the proper boundary conditions in all channels. The implementation of the boundary conditions constitutes the main obstacle to the practical solution of the problem. In momentum space the difficulty translates essentially into the proliferation with A of the Yakubovsky equations, as well as into the complicated treatment of the poles of the resolvents. For example, the difficulties in dealing with the three-body break-up channel for 4He have been overcome only very recently [14].
The Lorentz integral transform (LIT) method [15] allows one to avoid the complications of a continuum calculation, because it reduces the difficulties to those typical of a bound-state problem, where the boundary conditions are much easier to implement. The LIT method has been applied to systems with A ≤ 7 using the Faddeev method [16], the correlated hyperspherical-harmonics method [17–21], the EIHH method [22–25] or the NCSM [26, 27]. All those methods, however, have been introduced for dealing with typical few-body systems and cannot easily be extended to medium-heavy nuclei. It is therefore desirable to formulate the LIT method in the framework of other many-body methods.
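For completeness, the LIT of a response function R(ω) and the bound-state-like equation behind it read, in standard notation:

L(\sigma_0, \Gamma) \;=\; \int \mathrm{d}\omega\,
   \frac{R(\omega)}{(\omega - \sigma_0)^2 + \Gamma^2}
 \;=\; \langle \widetilde{\Psi} | \widetilde{\Psi} \rangle,
\qquad
\left(H - E_0 - \sigma_0 + i\Gamma\right) |\widetilde{\Psi}\rangle
 \;=\; \hat{\Theta}\, |\Psi_0\rangle,

where Θ̂ is the excitation operator (here the dipole operator). Since Γ > 0, |Ψ̃⟩ has bound-state-like asymptotics, so no continuum boundary conditions are needed; this is the property the coupled-cluster reformulation of this work exploits.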
In the present work we provide such a formulation for the coupled-cluster (CC) method [3, 28–32], which is a
arXiv:1410.2714v1 [nucl-ex] 10 Oct 2014
First determination of the one-proton induced Non-Mesonic Weak Decay width of p-shell Λ-Hypernuclei
The FINUDA Collaboration, M. Agnello a,b, L. Benussi c, M. Bertani c, H.C. Bhang d, G. Bonomi e,f, E. Botta g,b,∗, T. Bressani g,b, S. Bufalino b, D. Calvo b, P. Camerini h,i, B. Dalena j,k,1, F. De Mori g,b, G. D'Erasmo j,k, A. Feliciello b, A. Filippi b, H. Fujioka l, P. Gianotti c, N. Grion i, V. Lucherini c, S. Marcello g,b, T. Nagae l, H. Outa m, V. Paticchio j, S. Piano i, R. Rui h,i, G. Simonetti j,k, A. Zenoni e,f
a DISAT, Politecnico di Torino, corso Duca degli Abruzzi 24, Torino, Italy
b INFN Sezione di Torino, via P. Giuria 1, Torino, Italy
c Laboratori Nazionali di Frascati dell'INFN, via E. Fermi 40, Frascati, Italy
d Department of Physics, Seoul National University, 151-742 Seoul, South Korea
e Dipartimento di Ingegneria Meccanica e Industriale, Università di Brescia, via Branze 38, Brescia, Italy
f INFN Sezione di Pavia, via Bassi 6, Pavia, Italy
g Dipartimento di Fisica, Università di Torino, via P. Giuria 1, Torino, Italy
h Dipartimento di Fisica, Università di Trieste, via Valerio 2, Trieste, Italy
i INFN Sezione di Trieste, via Valerio 2, Trieste, Italy
j INFN Sezione di Bari, via Amendola 173, Bari, Italy
k Dipartimento di Fisica, Università di Bari, via Amendola 173, Bari, Italy
l Department of Physics, Kyoto University, Sakyo-ku, Kyoto, Japan
m RIKEN, Wako, Saitama 351-0198, Japan
Abstract
Previous studies of proton and neutron spectra from the Non-Mesonic Weak Decay of eight Λ-Hypernuclei (A = 5–16) have been revisited. New values of the ratio of the two-nucleon and the one-proton induced decay widths, Γ2N/Γp, are obtained from single-proton spectra, Γ2N/Γp = 0.50 ± 0.24, and from neutron and proton coincidence spectra, Γ2N/Γp = 0.36 ± 0.14(stat) +0.05/−0.04(sys), in full agreement with previously published values. With these values, a method is
∗ Corresponding author. E-mail address: [email protected]
1 Now at CEA/SACLAY, DSM/Irfu/SACM, F-91191 Gif-sur-Yvette, France