diff --git a/appendix.md b/appendix.md
new file mode 100644
index 0000000..c421d75
--- /dev/null
+++ b/appendix.md
@@ -0,0 +1,138 @@
+# Appendix
+
+
+: Cross section limits using 2016 data and the N-subjettiness tagger for the decay to qW
+
+| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
+|------------|-----------------|------------------|------------------|-----------------|
+| 1.6 | 0.10406 | 0.14720 | 0.07371 | 0.08165 |
+| 1.8 | 0.07656 | 0.10800 | 0.05441 | 0.04114 |
+| 2.0 | 0.05422 | 0.07605 | 0.03879 | 0.04043 |
+| 2.5 | 0.02430 | 0.03408 | 0.01747 | 0.04052 |
+| 3.0 | 0.01262 | 0.01775 | 0.00904 | 0.02109 |
+| 3.5 | 0.00703 | 0.00992 | 0.00502 | 0.00399 |
+| 4.0 | 0.00424 | 0.00603 | 0.00300 | 0.00172 |
+| 4.5 | 0.00355 | 0.00478 | 0.00273 | 0.00249 |
+| 5.0 | 0.00269 | 0.00357 | 0.00211 | 0.00240 |
+| 6.0 | 0.00103 | 0.00160 | 0.00068 | 0.00062 |
+| 7.0 | 0.00063 | 0.00105 | 0.00039 | 0.00086 |
+
+
+: Cross section limits using 2016 data and the deep boosted tagger for the decay to qW
+
+| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
+|------------|-----------------|------------------|------------------|-----------------|
+| 1.6 | 0.17750 | 0.25179 | 0.12572 | 0.38242 |
+| 1.8 | 0.11125 | 0.15870 | 0.07826 | 0.11692 |
+| 2.0 | 0.08188 | 0.11549 | 0.05799 | 0.09528 |
+| 2.5 | 0.03328 | 0.04668 | 0.02373 | 0.03653 |
+| 3.0 | 0.01648 | 0.02338 | 0.01181 | 0.01108 |
+| 3.5 | 0.00840 | 0.01195 | 0.00593 | 0.00683 |
+| 4.0 | 0.00459 | 0.00666 | 0.00322 | 0.00342 |
+| 4.5 | 0.00276 | 0.00412 | 0.00190 | 0.00366 |
+| 5.0 | 0.00177 | 0.00271 | 0.00118 | 0.00401 |
+| 6.0 | 0.00110 | 0.00175 | 0.00071 | 0.00155 |
+| 7.0 | 0.00065 | 0.00108 | 0.00041 | 0.00108 |
+
+
+: Cross section limits using 2016 data and the N-subjettiness tagger for the decay to qZ
+
+| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
+|------------|-----------------|------------------|------------------|-----------------|
+| 1.6 | 0.08687 | 0.12254 | 0.06174 | 0.06987 |
+| 1.8 | 0.06719 | 0.09477 | 0.04832 | 0.03424 |
+| 2.0 | 0.04734 | 0.06640 | 0.03405 | 0.03310 |
+| 2.5 | 0.01867 | 0.02619 | 0.01343 | 0.03214 |
+| 3.0 | 0.01043 | 0.01463 | 0.00744 | 0.01773 |
+| 3.5 | 0.00596 | 0.00840 | 0.00426 | 0.00347 |
+| 4.0 | 0.00353 | 0.00500 | 0.00250 | 0.00140 |
+| 4.5 | 0.00233 | 0.00335 | 0.00164 | 0.00181 |
+| 5.0 | 0.00157 | 0.00231 | 0.00110 | 0.00188 |
+| 6.0 | 0.00082 | 0.00126 | 0.00054 | 0.00049 |
+| 7.0 | 0.00050 | 0.00083 | 0.00031 | 0.00066 |
+
+
+: Cross section limits using 2016 data and the deep boosted tagger for the decay to qZ
+
+| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
+|------------|-----------------|------------------|------------------|-----------------|
+| 1.6 | 0.16687 | 0.23805 | 0.11699 | 0.35999 |
+| 1.8 | 0.12750 | 0.17934 | 0.09138 | 0.12891 |
+| 2.0 | 0.09062 | 0.12783 | 0.06474 | 0.09977 |
+| 2.5 | 0.03391 | 0.04783 | 0.02422 | 0.03754 |
+| 3.0 | 0.01781 | 0.02513 | 0.01277 | 0.01159 |
+| 3.5 | 0.00949 | 0.01346 | 0.00678 | 0.00741 |
+| 4.0 | 0.00494 | 0.00711 | 0.00349 | 0.00362 |
+| 4.5 | 0.00293 | 0.00429 | 0.00203 | 0.00368 |
+| 5.0 | 0.00188 | 0.00284 | 0.00127 | 0.00426 |
+| 6.0 | 0.00102 | 0.00161 | 0.00066 | 0.00155 |
+| 7.0 | 0.00053 | 0.00085 | 0.00034 | 0.00085 |
+
+
+: Cross section limits using the combined data and the N-subjettiness tagger for the decay to qW
+
+| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
+|------------|-----------------|------------------|------------------|-----------------|
+| 1.6 | 0.05703 | 0.07999 | 0.04088 | 0.03366 |
+| 1.8 | 0.03953 | 0.05576 | 0.02833 | 0.04319 |
+| 2.0 | 0.02844 | 0.03989 | 0.02045 | 0.04755 |
+| 2.5 | 0.01270 | 0.01781 | 0.00913 | 0.01519 |
+| 3.0 | 0.00658 | 0.00923 | 0.00473 | 0.01218 |
+| 3.5 | 0.00376 | 0.00529 | 0.00269 | 0.00474 |
+| 4.0 | 0.00218 | 0.00309 | 0.00156 | 0.00114 |
+| 4.5 | 0.00132 | 0.00188 | 0.00094 | 0.00068 |
+| 5.0 | 0.00084 | 0.00122 | 0.00060 | 0.00059 |
+| 6.0 | 0.00044 | 0.00066 | 0.00030 | 0.00041 |
+| 7.0 | 0.00022 | 0.00036 | 0.00014 | 0.00043 |
+
+
+: Cross section limits using the combined data and the deep boosted tagger for the decay to qW
+
+| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
+|------------|-----------------|------------------|------------------|-----------------|
+| 1.6 | 0.06656 | 0.09495 | 0.04698 | 0.12374 |
+| 1.8 | 0.04281 | 0.06141 | 0.03001 | 0.05422 |
+| 2.0 | 0.03297 | 0.04650 | 0.02363 | 0.04658 |
+| 2.5 | 0.01328 | 0.01868 | 0.00950 | 0.01109 |
+| 3.0 | 0.00650 | 0.00917 | 0.00464 | 0.00502 |
+| 3.5 | 0.00338 | 0.00479 | 0.00241 | 0.00408 |
+| 4.0 | 0.00182 | 0.00261 | 0.00129 | 0.00127 |
+| 4.5 | 0.00107 | 0.00156 | 0.00074 | 0.00123 |
+| 5.0 | 0.00068 | 0.00102 | 0.00046 | 0.00149 |
+| 6.0 | 0.00038 | 0.00060 | 0.00024 | 0.00034 |
+| 7.0 | 0.00021 | 0.00035 | 0.00013 | 0.00046 |
+
+
+
+: Cross section limits using the combined data and the N-subjettiness tagger for the decay to qZ
+
+| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
+|------------|-----------------|------------------|------------------|-----------------|
+| 1.6 | 0.05125 | 0.07188 | 0.03667 | 0.02993 |
+| 1.8 | 0.03547 | 0.04989 | 0.02551 | 0.03614 |
+| 2.0 | 0.02523 | 0.03539 | 0.01815 | 0.04177 |
+| 2.5 | 0.01059 | 0.01485 | 0.00761 | 0.01230 |
+| 3.0 | 0.00576 | 0.00808 | 0.00412 | 0.01087 |
+| 3.5 | 0.00327 | 0.00460 | 0.00234 | 0.00425 |
+| 4.0 | 0.00190 | 0.00269 | 0.00136 | 0.00097 |
+| 4.5 | 0.00119 | 0.00168 | 0.00084 | 0.00059 |
+| 5.0 | 0.00077 | 0.00110 | 0.00054 | 0.00051 |
+| 6.0 | 0.00039 | 0.00057 | 0.00026 | 0.00036 |
+| 7.0 | 0.00019 | 0.00031 | 0.00013 | 0.00036 |
+
+
+: Cross section limits using the combined data and the deep boosted tagger for the decay to qZ
+
+| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
+|------------|-----------------|------------------|------------------|-----------------|
+| 1.6 | 0.07719 | 0.10949 | 0.05467 | 0.14090 |
+| 1.8 | 0.05297 | 0.07493 | 0.03752 | 0.06690 |
+| 2.0 | 0.03875 | 0.05466 | 0.02768 | 0.05855 |
+| 2.5 | 0.01512 | 0.02126 | 0.01080 | 0.01160 |
+| 3.0 | 0.00773 | 0.01088 | 0.00554 | 0.00548 |
+| 3.5 | 0.00400 | 0.00565 | 0.00285 | 0.00465 |
+| 4.0 | 0.00211 | 0.00301 | 0.00149 | 0.00152 |
+| 4.5 | 0.00118 | 0.00172 | 0.00082 | 0.00128 |
+| 5.0 | 0.00073 | 0.00108 | 0.00050 | 0.00161 |
+| 6.0 | 0.00039 | 0.00060 | 0.00025 | 0.00036 |
+| 7.0 | 0.00021 | 0.00034 | 0.00013 | 0.00045 |
diff --git a/appendix.tex b/appendix.tex
new file mode 100644
index 0000000..d375de2
--- /dev/null
+++ b/appendix.tex
@@ -0,0 +1,219 @@
+\newpage
+\hypertarget{appendix}{%
+\section*{Appendix}\label{appendix}}
+
+\begin{longtable}[]{@{}lllll@{}}
+\caption{Cross section limits using 2016 data and the N-subjettiness
+tagger for the decay to qW}\tabularnewline
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endfirsthead
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endhead
+1.6 & 0.10406 & 0.14720 & 0.07371 & 0.08165\tabularnewline
+1.8 & 0.07656 & 0.10800 & 0.05441 & 0.04114\tabularnewline
+2.0 & 0.05422 & 0.07605 & 0.03879 & 0.04043\tabularnewline
+2.5 & 0.02430 & 0.03408 & 0.01747 & 0.04052\tabularnewline
+3.0 & 0.01262 & 0.01775 & 0.00904 & 0.02109\tabularnewline
+3.5 & 0.00703 & 0.00992 & 0.00502 & 0.00399\tabularnewline
+4.0 & 0.00424 & 0.00603 & 0.00300 & 0.00172\tabularnewline
+4.5 & 0.00355 & 0.00478 & 0.00273 & 0.00249\tabularnewline
+5.0 & 0.00269 & 0.00357 & 0.00211 & 0.00240\tabularnewline
+6.0 & 0.00103 & 0.00160 & 0.00068 & 0.00062\tabularnewline
+7.0 & 0.00063 & 0.00105 & 0.00039 & 0.00086\tabularnewline
+\bottomrule
+\end{longtable}
+
+\begin{longtable}[]{@{}lllll@{}}
+\caption{Cross section limits using 2016 data and the deep boosted
+tagger for the decay to qW}\tabularnewline
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endfirsthead
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endhead
+1.6 & 0.17750 & 0.25179 & 0.12572 & 0.38242\tabularnewline
+1.8 & 0.11125 & 0.15870 & 0.07826 & 0.11692\tabularnewline
+2.0 & 0.08188 & 0.11549 & 0.05799 & 0.09528\tabularnewline
+2.5 & 0.03328 & 0.04668 & 0.02373 & 0.03653\tabularnewline
+3.0 & 0.01648 & 0.02338 & 0.01181 & 0.01108\tabularnewline
+3.5 & 0.00840 & 0.01195 & 0.00593 & 0.00683\tabularnewline
+4.0 & 0.00459 & 0.00666 & 0.00322 & 0.00342\tabularnewline
+4.5 & 0.00276 & 0.00412 & 0.00190 & 0.00366\tabularnewline
+5.0 & 0.00177 & 0.00271 & 0.00118 & 0.00401\tabularnewline
+6.0 & 0.00110 & 0.00175 & 0.00071 & 0.00155\tabularnewline
+7.0 & 0.00065 & 0.00108 & 0.00041 & 0.00108\tabularnewline
+\bottomrule
+\end{longtable}
+
+\begin{longtable}[]{@{}lllll@{}}
+\caption{Cross section limits using 2016 data and the N-subjettiness
+tagger for the decay to qZ}\tabularnewline
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endfirsthead
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endhead
+1.6 & 0.08687 & 0.12254 & 0.06174 & 0.06987\tabularnewline
+1.8 & 0.06719 & 0.09477 & 0.04832 & 0.03424\tabularnewline
+2.0 & 0.04734 & 0.06640 & 0.03405 & 0.03310\tabularnewline
+2.5 & 0.01867 & 0.02619 & 0.01343 & 0.03214\tabularnewline
+3.0 & 0.01043 & 0.01463 & 0.00744 & 0.01773\tabularnewline
+3.5 & 0.00596 & 0.00840 & 0.00426 & 0.00347\tabularnewline
+4.0 & 0.00353 & 0.00500 & 0.00250 & 0.00140\tabularnewline
+4.5 & 0.00233 & 0.00335 & 0.00164 & 0.00181\tabularnewline
+5.0 & 0.00157 & 0.00231 & 0.00110 & 0.00188\tabularnewline
+6.0 & 0.00082 & 0.00126 & 0.00054 & 0.00049\tabularnewline
+7.0 & 0.00050 & 0.00083 & 0.00031 & 0.00066\tabularnewline
+\bottomrule
+\end{longtable}
+
+\begin{longtable}[]{@{}lllll@{}}
+\caption{Cross section limits using 2016 data and the deep boosted
+tagger for the decay to qZ}\tabularnewline
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endfirsthead
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endhead
+1.6 & 0.16687 & 0.23805 & 0.11699 & 0.35999\tabularnewline
+1.8 & 0.12750 & 0.17934 & 0.09138 & 0.12891\tabularnewline
+2.0 & 0.09062 & 0.12783 & 0.06474 & 0.09977\tabularnewline
+2.5 & 0.03391 & 0.04783 & 0.02422 & 0.03754\tabularnewline
+3.0 & 0.01781 & 0.02513 & 0.01277 & 0.01159\tabularnewline
+3.5 & 0.00949 & 0.01346 & 0.00678 & 0.00741\tabularnewline
+4.0 & 0.00494 & 0.00711 & 0.00349 & 0.00362\tabularnewline
+4.5 & 0.00293 & 0.00429 & 0.00203 & 0.00368\tabularnewline
+5.0 & 0.00188 & 0.00284 & 0.00127 & 0.00426\tabularnewline
+6.0 & 0.00102 & 0.00161 & 0.00066 & 0.00155\tabularnewline
+7.0 & 0.00053 & 0.00085 & 0.00034 & 0.00085\tabularnewline
+\bottomrule
+\end{longtable}
+
+\begin{longtable}[]{@{}lllll@{}}
+\caption{Cross section limits using the combined data and the
+N-subjettiness tagger for the decay to qW}\tabularnewline
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endfirsthead
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endhead
+1.6 & 0.05703 & 0.07999 & 0.04088 & 0.03366\tabularnewline
+1.8 & 0.03953 & 0.05576 & 0.02833 & 0.04319\tabularnewline
+2.0 & 0.02844 & 0.03989 & 0.02045 & 0.04755\tabularnewline
+2.5 & 0.01270 & 0.01781 & 0.00913 & 0.01519\tabularnewline
+3.0 & 0.00658 & 0.00923 & 0.00473 & 0.01218\tabularnewline
+3.5 & 0.00376 & 0.00529 & 0.00269 & 0.00474\tabularnewline
+4.0 & 0.00218 & 0.00309 & 0.00156 & 0.00114\tabularnewline
+4.5 & 0.00132 & 0.00188 & 0.00094 & 0.00068\tabularnewline
+5.0 & 0.00084 & 0.00122 & 0.00060 & 0.00059\tabularnewline
+6.0 & 0.00044 & 0.00066 & 0.00030 & 0.00041\tabularnewline
+7.0 & 0.00022 & 0.00036 & 0.00014 & 0.00043\tabularnewline
+\bottomrule
+\end{longtable}
+
+\begin{longtable}[]{@{}lllll@{}}
+\caption{Cross section limits using the combined data and the deep
+boosted tagger for the decay to qW}\tabularnewline
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endfirsthead
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endhead
+1.6 & 0.06656 & 0.09495 & 0.04698 & 0.12374\tabularnewline
+1.8 & 0.04281 & 0.06141 & 0.03001 & 0.05422\tabularnewline
+2.0 & 0.03297 & 0.04650 & 0.02363 & 0.04658\tabularnewline
+2.5 & 0.01328 & 0.01868 & 0.00950 & 0.01109\tabularnewline
+3.0 & 0.00650 & 0.00917 & 0.00464 & 0.00502\tabularnewline
+3.5 & 0.00338 & 0.00479 & 0.00241 & 0.00408\tabularnewline
+4.0 & 0.00182 & 0.00261 & 0.00129 & 0.00127\tabularnewline
+4.5 & 0.00107 & 0.00156 & 0.00074 & 0.00123\tabularnewline
+5.0 & 0.00068 & 0.00102 & 0.00046 & 0.00149\tabularnewline
+6.0 & 0.00038 & 0.00060 & 0.00024 & 0.00034\tabularnewline
+7.0 & 0.00021 & 0.00035 & 0.00013 & 0.00046\tabularnewline
+\bottomrule
+\end{longtable}
+
+
+\begin{longtable}[]{@{}lllll@{}}
+\caption{Cross section limits using the combined data and the
+N-subjettiness tagger for the decay to qZ}\tabularnewline
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endfirsthead
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endhead
+1.6 & 0.05125 & 0.07188 & 0.03667 & 0.02993\tabularnewline
+1.8 & 0.03547 & 0.04989 & 0.02551 & 0.03614\tabularnewline
+2.0 & 0.02523 & 0.03539 & 0.01815 & 0.04177\tabularnewline
+2.5 & 0.01059 & 0.01485 & 0.00761 & 0.01230\tabularnewline
+3.0 & 0.00576 & 0.00808 & 0.00412 & 0.01087\tabularnewline
+3.5 & 0.00327 & 0.00460 & 0.00234 & 0.00425\tabularnewline
+4.0 & 0.00190 & 0.00269 & 0.00136 & 0.00097\tabularnewline
+4.5 & 0.00119 & 0.00168 & 0.00084 & 0.00059\tabularnewline
+5.0 & 0.00077 & 0.00110 & 0.00054 & 0.00051\tabularnewline
+6.0 & 0.00039 & 0.00057 & 0.00026 & 0.00036\tabularnewline
+7.0 & 0.00019 & 0.00031 & 0.00013 & 0.00036\tabularnewline
+\bottomrule
+\end{longtable}
+
+\begin{longtable}[]{@{}lllll@{}}
+\caption{Cross section limits using the combined data and the deep
+boosted tagger for the decay to qZ}\tabularnewline
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endfirsthead
+\toprule
+Mass {[}TeV{]} & Exp. limit {[}pb{]} & Upper limit {[}pb{]} & Lower
+limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline
+\midrule
+\endhead
+1.6 & 0.07719 & 0.10949 & 0.05467 & 0.14090\tabularnewline
+1.8 & 0.05297 & 0.07493 & 0.03752 & 0.06690\tabularnewline
+2.0 & 0.03875 & 0.05466 & 0.02768 & 0.05855\tabularnewline
+2.5 & 0.01512 & 0.02126 & 0.01080 & 0.01160\tabularnewline
+3.0 & 0.00773 & 0.01088 & 0.00554 & 0.00548\tabularnewline
+3.5 & 0.00400 & 0.00565 & 0.00285 & 0.00465\tabularnewline
+4.0 & 0.00211 & 0.00301 & 0.00149 & 0.00152\tabularnewline
+4.5 & 0.00118 & 0.00172 & 0.00082 & 0.00128\tabularnewline
+5.0 & 0.00073 & 0.00108 & 0.00050 & 0.00161\tabularnewline
+6.0 & 0.00039 & 0.00060 & 0.00025 & 0.00036\tabularnewline
+7.0 & 0.00021 & 0.00034 & 0.00013 & 0.00045\tabularnewline
+\bottomrule
+\end{longtable}
diff --git a/figures/cb_fit.pdf b/figures/cb_fit.pdf
index e9a0da7..a99e47f 100644
Binary files a/figures/cb_fit.pdf and b/figures/cb_fit.pdf differ
diff --git a/figures/cb_fit_old.pdf b/figures/cb_fit_old.pdf
new file mode 100644
index 0000000..e9a0da7
Binary files /dev/null and b/figures/cb_fit_old.pdf differ
diff --git a/figures/limit_comp_2018.pdf b/figures/limit_comp_2018.pdf
new file mode 100644
index 0000000..48b9239
Binary files /dev/null and b/figures/limit_comp_2018.pdf differ
diff --git a/make.sh b/make.sh
index 300e8b6..24c299c 100755
--- a/make.sh
+++ b/make.sh
@@ -1,6 +1,6 @@
 #!/bin/sh
 
-pandoc thesis.md -o thesis.tex --biblatex --bibliography=bibliography.bib -N --listings --pdf-engine=lualatex -s --filter pandoc-crossref
+pandoc thesis.md -o thesis.tex --biblatex --bibliography=bibliography.bib -N --listings --pdf-engine=lualatex -s --filter pandoc-crossref --include-after-body=appendix.tex
 lualatex thesis
 biber thesis
 lualatex thesis
diff --git a/thesis.aux b/thesis.aux
index 3b17b55..45ffa77 100644
--- a/thesis.aux
+++ b/thesis.aux
@@ -23,74 +23,28 @@
 \@writefile{lof}{\boolfalse {citerequest}\boolfalse {citetracker}\boolfalse {pagetracker}\boolfalse {backtracker}\relax }
 \@writefile{lot}{\boolfalse {citerequest}\boolfalse {citetracker}\boolfalse {pagetracker}\boolfalse {backtracker}\relax }
 \babel@aux{british}{}
-\BKM@entry{id=1,dest={73656374696F6E2E31},srcline={133},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030495C3030306E5C303030745C303030725C3030306F5C303030645C303030755C303030635C303030745C303030695C3030306F5C3030306E} +\BKM@entry{id=1,dest={73656374696F6E2E31},srcline={132},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030495C3030306E5C303030745C303030725C3030306F5C303030645C303030755C303030635C303030745C303030695C3030306F5C3030306E} +\abx@aux@cite{PREV_RESEARCH} +\abx@aux@segm{0}{0}{PREV_RESEARCH} \@writefile{toc}{\contentsline {section}{\numberline {1}Introduction}{1}{section.1}\protected@file@percent } \newlabel{introduction}{{1}{1}{Introduction}{section.1}{}} -\BKM@entry{id=2,dest={73656374696F6E2E32},srcline={173},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030306F5C303030725C303030655C303030745C303030695C303030635C303030615C3030306C5C3030305C3034305C303030625C303030615C303030635C3030306B5C303030675C303030725C3030306F5C303030755C3030306E5C30303064} -\BKM@entry{id=3,dest={73756273656374696F6E2E322E31},srcline={182},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030745C303030615C3030306E5C303030645C303030615C303030725C303030645C3030305C3034305C3030306D5C3030306F5C303030645C303030655C3030306C} -\@writefile{toc}{\contentsline {section}{\numberline {2}Theoretical background}{2}{section.2}\protected@file@percent } -\newlabel{theoretical-background}{{2}{2}{Theoretical background}{section.2}{}} +\BKM@entry{id=2,dest={73656374696F6E2E32},srcline={181},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030306F5C303030725C303030655C303030745C303030695C303030635C303030615C3030306C5C3030305C3034305C3030306D5C3030306F5C303030745C303030695C303030765C303030615C303030745C303030695C3030306F5C3030306E} +\BKM@entry{id=3,dest={73756273656374696F6E2E322E31},srcline={190},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030745C303030615C3030306E5C303030645C303030615C303030725C303030645C3030305C3034305C3030306D5C3030306F5C303030645C303030655C3030306C} +\@writefile{toc}{\contentsline {section}{\numberline {2}Theoretical motivation}{2}{section.2}\protected@file@percent } +\newlabel{theoretical-motivation}{{2}{2}{Theoretical motivation}{section.2}{}} \@writefile{toc}{\contentsline {subsection}{\numberline {2.1}Standard model}{2}{subsection.2.1}\protected@file@percent } -\newlabel{standard-model}{{2.1}{2}{Standard model}{subsection.2.1}{}} +\newlabel{sec:sm}{{2.1}{2}{Standard model}{subsection.2.1}{}} \@writefile{lof}{\contentsline {figure}{\numberline {1}{\ignorespaces Elementary particles of the Standard Model and their mass charge and spin.\relax }}{2}{figure.caption.2}\protected@file@percent } \providecommand*\caption@xref[2]{\@setref\relax\@undefined{#1}} \newlabel{fig:sm}{{1}{2}{Elementary particles of the Standard Model and their mass charge and spin.\relax }{figure.caption.2}{}} -\BKM@entry{id=4,dest={73756273756273656374696F6E2E322E312E31},srcline={262},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030515C303030755C303030615C3030306E5C303030745C303030755C3030306D5C3030305C3034305C303030435C303030685C303030725C3030306F5C3030306D5C3030306F5C303030645C303030795C3030306E5C303030615C3030306D5C303030695C303030635C3030305C3034305C303030625C303030615C303030635C3030306B5C303030675C303030725C3030306F5C303030755C3030306E5C30303064} 
-\BKM@entry{id=5,dest={73756273756273656374696F6E2E322E312E32},srcline={294},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030685C3030306F5C303030725C303030745C303030635C3030306F5C3030306D5C303030695C3030306E5C303030675C303030735C3030305C3034305C3030306F5C303030665C3030305C3034305C303030745C303030685C303030655C3030305C3034305C303030535C303030745C303030615C3030306E5C303030645C303030615C303030725C303030645C3030305C3034305C3030304D5C3030306F5C303030645C303030655C3030306C} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {2.1.1}Quantum Chromodynamic background}{3}{subsubsection.2.1.1}\protected@file@percent } -\newlabel{sec:qcdbg}{{2.1.1}{3}{Quantum Chromodynamic background}{subsubsection.2.1.1}{}} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {2.1.2}Shortcomings of the Standard Model}{3}{subsubsection.2.1.2}\protected@file@percent } -\newlabel{shortcomings-of-the-standard-model}{{2.1.2}{3}{Shortcomings of the Standard Model}{subsubsection.2.1.2}{}} -\BKM@entry{id=6,dest={73756273656374696F6E2E322E32},srcline={327},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030455C303030785C303030635C303030695C303030745C303030655C303030645C3030305C3034305C303030715C303030755C303030615C303030725C3030306B5C3030305C3034305C303030735C303030745C303030615C303030745C303030655C30303073} -\@writefile{lof}{\contentsline {figure}{\numberline {2}{\ignorespaces Two examples of QCD processes resulting in two jets.\relax }}{4}{figure.caption.3}\protected@file@percent } -\newlabel{fig:qcdfeynman}{{2}{4}{Two examples of QCD processes resulting in two jets.\relax }{figure.caption.3}{}} +\BKM@entry{id=4,dest={73756273756273656374696F6E2E322E312E31},srcline={289},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030685C3030306F5C303030725C303030745C303030635C3030306F5C3030306D5C303030695C3030306E5C303030675C303030735C3030305C3034305C3030306F5C303030665C3030305C3034305C303030745C303030685C303030655C3030305C3034305C303030535C303030745C303030615C3030306E5C303030645C303030615C303030725C303030645C3030305C3034305C3030304D5C3030306F5C303030645C303030655C3030306C} +\BKM@entry{id=5,dest={73756273656374696F6E2E322E32},srcline={322},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030455C303030785C303030635C303030695C303030745C303030655C303030645C3030305C3034305C303030715C303030755C303030615C303030725C3030306B5C3030305C3034305C303030735C303030745C303030615C303030745C303030655C30303073} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {2.1.1}Shortcomings of the Standard Model}{4}{subsubsection.2.1.1}\protected@file@percent } +\newlabel{shortcomings-of-the-standard-model}{{2.1.1}{4}{Shortcomings of the Standard Model}{subsubsection.2.1.1}{}} \@writefile{toc}{\contentsline {subsection}{\numberline {2.2}Excited quark states}{4}{subsection.2.2}\protected@file@percent } \newlabel{sec:qs}{{2.2}{4}{Excited quark states}{subsection.2.2}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces Feynman diagram showing a possible decay of a q* particle to a W boson and a quark with the W boson also decaying to two quarks.\relax }}{5}{figure.caption.4}\protected@file@percent } -\newlabel{fig:qsfeynman}{{3}{5}{Feynman diagram showing a possible decay of a q* particle to a W boson and a quark with the W boson also decaying to two quarks.\relax }{figure.caption.4}{}} 
-\BKM@entry{id=7,dest={73656374696F6E2E33},srcline={374},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030455C303030785C303030705C303030655C303030725C303030695C3030306D5C303030655C3030306E5C303030745C303030615C3030306C5C3030305C3034305C303030535C303030655C303030745C303030755C30303070} -\BKM@entry{id=8,dest={73756273656374696F6E2E332E31},srcline={380},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304C5C303030615C303030725C303030675C303030655C3030305C3034305C303030485C303030615C303030645C303030725C3030306F5C3030306E5C3030305C3034305C303030435C3030306F5C3030306C5C3030306C5C303030695C303030645C303030655C30303072} -\abx@aux@cite{website} -\abx@aux@segm{0}{0}{website} -\BKM@entry{id=9,dest={73756273656374696F6E2E332E32},srcline={414},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030435C3030306F5C3030306D5C303030705C303030615C303030635C303030745C3030305C3034305C3030304D5C303030755C3030306F5C3030306E5C3030305C3034305C303030535C3030306F5C3030306C5C303030655C3030306E5C3030306F5C303030695C30303064} -\@writefile{toc}{\contentsline {section}{\numberline {3}Experimental Setup}{6}{section.3}\protected@file@percent } -\newlabel{experimental-setup}{{3}{6}{Experimental Setup}{section.3}{}} -\@writefile{toc}{\contentsline {subsection}{\numberline {3.1}Large Hadron Collider}{6}{subsection.3.1}\protected@file@percent } -\newlabel{large-hadron-collider}{{3.1}{6}{Large Hadron Collider}{subsection.3.1}{}} -\@writefile{toc}{\contentsline {subsection}{\numberline {3.2}Compact Muon Solenoid}{6}{subsection.3.2}\protected@file@percent } -\newlabel{compact-muon-solenoid}{{3.2}{6}{Compact Muon Solenoid}{subsection.3.2}{}} -\BKM@entry{id=10,dest={73756273756273656374696F6E2E332E322E31},srcline={432},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030435C3030306F5C3030306F5C303030725C303030645C303030695C3030306E5C303030615C303030745C303030655C3030305C3034305C303030635C3030306F5C3030306E5C303030765C303030655C3030306E5C303030745C303030695C3030306F5C3030306E5C30303073} -\BKM@entry{id=11,dest={73756273756273656374696F6E2E332E322E32},srcline={458},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030745C303030725C303030615C303030635C3030306B5C303030695C3030306E5C303030675C3030305C3034305C303030735C303030795C303030735C303030745C303030655C3030306D} -\BKM@entry{id=12,dest={73756273756273656374696F6E2E332E322E33},srcline={468},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030655C3030306C5C303030655C303030635C303030745C303030725C3030306F5C3030306D5C303030615C303030675C3030306E5C303030655C303030745C303030695C303030635C3030305C3034305C303030635C303030615C3030306C5C3030306F5C303030725C303030695C3030306D5C303030655C303030745C303030655C30303072} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.1}Coordinate conventions}{7}{subsubsection.3.2.1}\protected@file@percent } -\newlabel{coordinate-conventions}{{3.2.1}{7}{Coordinate conventions}{subsubsection.3.2.1}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces Coordinate conventions of the CMS illustrating the use of \(\miteta \) and \(\mitphi \). The Z axis is in beam direction. Taken from https://inspirehep.net/record/1236817/plots\relax }}{7}{figure.caption.5}\protected@file@percent } -\newlabel{fig:cmscoords}{{4}{7}{Coordinate conventions of the CMS illustrating the use of \(\eta \) and \(\phi \). The Z axis is in beam direction. 
Taken from https://inspirehep.net/record/1236817/plots\relax }{figure.caption.5}{}} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.2}The tracking system}{7}{subsubsection.3.2.2}\protected@file@percent } -\newlabel{the-tracking-system}{{3.2.2}{7}{The tracking system}{subsubsection.3.2.2}{}} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.3}The electromagnetic calorimeter}{7}{subsubsection.3.2.3}\protected@file@percent } -\newlabel{the-electromagnetic-calorimeter}{{3.2.3}{7}{The electromagnetic calorimeter}{subsubsection.3.2.3}{}} -\BKM@entry{id=13,dest={73756273756273656374696F6E2E332E322E34},srcline={481},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030685C303030615C303030645C303030725C3030306F5C3030306E5C303030695C303030635C3030305C3034305C303030635C303030615C3030306C5C3030306F5C303030725C303030695C3030306D5C303030655C303030745C303030655C30303072} -\BKM@entry{id=14,dest={73756273756273656374696F6E2E332E322E35},srcline={490},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030735C3030306F5C3030306C5C303030655C3030306E5C3030306F5C303030695C30303064} -\BKM@entry{id=15,dest={73756273756273656374696F6E2E332E322E36},srcline={498},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C3030306D5C303030755C3030306F5C3030306E5C3030305C3034305C303030735C303030795C303030735C303030745C303030655C3030306D} -\BKM@entry{id=16,dest={73756273756273656374696F6E2E332E322E37},srcline={508},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030545C303030725C303030695C303030675C303030675C303030655C303030725C3030305C3034305C303030735C303030795C303030735C303030745C303030655C3030306D} -\BKM@entry{id=17,dest={73756273756273656374696F6E2E332E322E38},srcline={522},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030505C303030615C303030725C303030745C303030695C303030635C3030306C5C303030655C3030305C3034305C303030465C3030306C5C3030306F5C303030775C3030305C3034305C303030615C3030306C5C303030675C3030306F5C303030725C303030695C303030745C303030685C3030306D} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.4}The hadronic calorimeter}{8}{subsubsection.3.2.4}\protected@file@percent } -\newlabel{the-hadronic-calorimeter}{{3.2.4}{8}{The hadronic calorimeter}{subsubsection.3.2.4}{}} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.5}The solenoid}{8}{subsubsection.3.2.5}\protected@file@percent } -\newlabel{the-solenoid}{{3.2.5}{8}{The solenoid}{subsubsection.3.2.5}{}} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.6}The muon system}{8}{subsubsection.3.2.6}\protected@file@percent } -\newlabel{the-muon-system}{{3.2.6}{8}{The muon system}{subsubsection.3.2.6}{}} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.7}The Trigger system}{8}{subsubsection.3.2.7}\protected@file@percent } -\newlabel{the-trigger-system}{{3.2.7}{8}{The Trigger system}{subsubsection.3.2.7}{}} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.8}The Particle Flow algorithm}{8}{subsubsection.3.2.8}\protected@file@percent } -\newlabel{the-particle-flow-algorithm}{{3.2.8}{8}{The Particle Flow algorithm}{subsubsection.3.2.8}{}} 
-\BKM@entry{id=18,dest={73756273656374696F6E2E332E33},srcline={542},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304A5C303030655C303030745C3030305C3034305C303030635C3030306C5C303030755C303030735C303030745C303030655C303030725C303030695C3030306E5C30303067} -\@writefile{toc}{\contentsline {subsection}{\numberline {3.3}Jet clustering}{9}{subsection.3.3}\protected@file@percent } -\newlabel{jet-clustering}{{3.3}{9}{Jet clustering}{subsection.3.3}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {5}{\ignorespaces Comparision of the \(k_t\), Cambridge/Aachen, SISCone and anti-\(k_t\) algorithms clustering a sample parton-level event with many random soft \enquote {ghosts}. Taken from\relax }}{10}{figure.caption.6}\protected@file@percent } -\newlabel{fig:antiktcomparision}{{5}{10}{Comparision of the \(k_t\), Cambridge/Aachen, SISCone and anti-\(k_t\) algorithms clustering a sample parton-level event with many random soft \enquote {ghosts}. Taken from\relax }{figure.caption.6}{}} -\BKM@entry{id=19,dest={73656374696F6E2E34},srcline={581},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304D5C303030655C303030745C303030685C3030306F5C303030645C3030305C3034305C3030306F5C303030665C3030305C3034305C303030615C3030306E5C303030615C3030306C5C303030795C303030735C303030695C30303073} +\@writefile{lof}{\contentsline {figure}{\numberline {2}{\ignorespaces Feynman diagram showing a possible decay of a q* particle to a W boson and a quark with the W boson also decaying to two quarks.\relax }}{4}{figure.caption.3}\protected@file@percent } +\newlabel{fig:qsfeynman}{{2}{4}{Feynman diagram showing a possible decay of a q* particle to a W boson and a quark with the W boson also decaying to two quarks.\relax }{figure.caption.3}{}} \abx@aux@cite{QSTAR_THEORY} \abx@aux@segm{0}{0}{QSTAR_THEORY} \gdef \LT@i {\LT@entry @@ -98,86 +52,164 @@ {1}{71.97462pt}\LT@entry {1}{69.63869pt}\LT@entry {1}{65.97462pt}} -\abx@aux@cite{PREV_RESEARCH} \abx@aux@segm{0}{0}{PREV_RESEARCH} -\BKM@entry{id=20,dest={73756273656374696F6E2E342E31},srcline={651},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030695C303030675C3030306E5C303030615C3030306C5C3030305C3034305C303030615C3030306E5C303030645C3030305C3034305C303030425C303030615C303030635C3030306B5C303030675C303030725C3030306F5C303030755C3030306E5C303030645C3030305C3034305C3030306D5C3030306F5C303030645C303030655C3030306C5C3030306C5C303030695C3030306E5C30303067} +\BKM@entry{id=6,dest={73756273756273656374696F6E2E322E322E31},srcline={410},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030515C303030755C303030615C3030306E5C303030745C303030755C3030306D5C3030305C3034305C303030435C303030685C303030725C3030306F5C3030306D5C3030306F5C303030645C303030795C3030306E5C303030615C3030306D5C303030695C303030635C3030305C3034305C303030625C303030615C303030635C3030306B5C303030675C303030725C3030306F5C303030755C3030306E5C30303064} +\@writefile{lot}{\contentsline {table}{\numberline {1}{\ignorespaces Branching ratios of the decaying q* particle.\relax }}{5}{table.1}\protected@file@percent } +\@writefile{toc}{\contentsline {subsubsection}{\numberline {2.2.1}Quantum Chromodynamic background}{5}{subsubsection.2.2.1}\protected@file@percent } +\newlabel{sec:qcdbg}{{2.2.1}{5}{Quantum Chromodynamic background}{subsubsection.2.2.1}{}} +\@writefile{lof}{\contentsline {figure}{\numberline {3}{\ignorespaces Two examples of QCD processes resulting in two jets.\relax }}{6}{figure.caption.4}\protected@file@percent } +\newlabel{fig:qcdfeynman}{{3}{6}{Two examples of QCD processes 
resulting in two jets.\relax }{figure.caption.4}{}} +\BKM@entry{id=7,dest={73656374696F6E2E33},srcline={442},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030455C303030785C303030705C303030655C303030725C303030695C3030306D5C303030655C3030306E5C303030745C303030615C3030306C5C3030305C3034305C303030535C303030655C303030745C303030755C30303070} +\BKM@entry{id=8,dest={73756273656374696F6E2E332E31},srcline={448},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304C5C303030615C303030725C303030675C303030655C3030305C3034305C303030485C303030615C303030645C303030725C3030306F5C3030306E5C3030305C3034305C303030435C3030306F5C3030306C5C3030306C5C303030695C303030645C303030655C30303072} +\abx@aux@cite{website} +\abx@aux@segm{0}{0}{website} +\BKM@entry{id=9,dest={73756273656374696F6E2E332E32},srcline={483},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030435C3030306F5C3030306D5C303030705C303030615C303030635C303030745C3030305C3034305C3030304D5C303030755C3030306F5C3030306E5C3030305C3034305C303030535C3030306F5C3030306C5C303030655C3030306E5C3030306F5C303030695C30303064} +\@writefile{toc}{\contentsline {section}{\numberline {3}Experimental Setup}{7}{section.3}\protected@file@percent } +\newlabel{experimental-setup}{{3}{7}{Experimental Setup}{section.3}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {3.1}Large Hadron Collider}{7}{subsection.3.1}\protected@file@percent } +\newlabel{large-hadron-collider}{{3.1}{7}{Large Hadron Collider}{subsection.3.1}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {3.2}Compact Muon Solenoid}{7}{subsection.3.2}\protected@file@percent } +\newlabel{compact-muon-solenoid}{{3.2}{7}{Compact Muon Solenoid}{subsection.3.2}{}} +\BKM@entry{id=10,dest={73756273756273656374696F6E2E332E322E31},srcline={501},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030435C3030306F5C3030306F5C303030725C303030645C303030695C3030306E5C303030615C303030745C303030655C3030305C3034305C303030635C3030306F5C3030306E5C303030765C303030655C3030306E5C303030745C303030695C3030306F5C3030306E5C30303073} +\BKM@entry{id=11,dest={73756273756273656374696F6E2E332E322E32},srcline={530},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030745C303030725C303030615C303030635C3030306B5C303030695C3030306E5C303030675C3030305C3034305C303030735C303030795C303030735C303030745C303030655C3030306D} +\BKM@entry{id=12,dest={73756273756273656374696F6E2E332E322E33},srcline={540},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030655C3030306C5C303030655C303030635C303030745C303030725C3030306F5C3030306D5C303030615C303030675C3030306E5C303030655C303030745C303030695C303030635C3030305C3034305C303030635C303030615C3030306C5C3030306F5C303030725C303030695C3030306D5C303030655C303030745C303030655C30303072} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.1}Coordinate conventions}{8}{subsubsection.3.2.1}\protected@file@percent } +\newlabel{coordinate-conventions}{{3.2.1}{8}{Coordinate conventions}{subsubsection.3.2.1}{}} +\@writefile{lof}{\contentsline {figure}{\numberline {4}{\ignorespaces Coordinate conventions of the CMS illustrating the use of \(\miteta \) and \(\mitphi \). The Z axis is in beam direction. Taken from https://inspirehep.net/record/1236817/plots\relax }}{8}{figure.caption.5}\protected@file@percent } +\newlabel{fig:cmscoords}{{4}{8}{Coordinate conventions of the CMS illustrating the use of \(\eta \) and \(\phi \). The Z axis is in beam direction. 
Taken from https://inspirehep.net/record/1236817/plots\relax }{figure.caption.5}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.2}The tracking system}{8}{subsubsection.3.2.2}\protected@file@percent } +\newlabel{the-tracking-system}{{3.2.2}{8}{The tracking system}{subsubsection.3.2.2}{}} +\BKM@entry{id=13,dest={73756273756273656374696F6E2E332E322E34},srcline={554},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030685C303030615C303030645C303030725C3030306F5C3030306E5C303030695C303030635C3030305C3034305C303030635C303030615C3030306C5C3030306F5C303030725C303030695C3030306D5C303030655C303030745C303030655C30303072} +\BKM@entry{id=14,dest={73756273756273656374696F6E2E332E322E35},srcline={563},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030735C3030306F5C3030306C5C303030655C3030306E5C3030306F5C303030695C30303064} +\BKM@entry{id=15,dest={73756273756273656374696F6E2E332E322E36},srcline={571},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C3030306D5C303030755C3030306F5C3030306E5C3030305C3034305C303030735C303030795C303030735C303030745C303030655C3030306D} +\BKM@entry{id=16,dest={73756273756273656374696F6E2E332E322E37},srcline={581},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030545C303030725C303030695C303030675C303030675C303030655C303030725C3030305C3034305C303030735C303030795C303030735C303030745C303030655C3030306D} +\BKM@entry{id=17,dest={73756273756273656374696F6E2E332E322E38},srcline={595},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030545C303030685C303030655C3030305C3034305C303030505C303030615C303030725C303030745C303030695C303030635C3030306C5C303030655C3030305C3034305C303030465C3030306C5C3030306F5C303030775C3030305C3034305C303030615C3030306C5C303030675C3030306F5C303030725C303030695C303030745C303030685C3030306D} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.3}The electromagnetic calorimeter}{9}{subsubsection.3.2.3}\protected@file@percent } +\newlabel{the-electromagnetic-calorimeter}{{3.2.3}{9}{The electromagnetic calorimeter}{subsubsection.3.2.3}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.4}The hadronic calorimeter}{9}{subsubsection.3.2.4}\protected@file@percent } +\newlabel{the-hadronic-calorimeter}{{3.2.4}{9}{The hadronic calorimeter}{subsubsection.3.2.4}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.5}The solenoid}{9}{subsubsection.3.2.5}\protected@file@percent } +\newlabel{the-solenoid}{{3.2.5}{9}{The solenoid}{subsubsection.3.2.5}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.6}The muon system}{9}{subsubsection.3.2.6}\protected@file@percent } +\newlabel{the-muon-system}{{3.2.6}{9}{The muon system}{subsubsection.3.2.6}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.7}The Trigger system}{9}{subsubsection.3.2.7}\protected@file@percent } +\newlabel{the-trigger-system}{{3.2.7}{9}{The Trigger system}{subsubsection.3.2.7}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {3.2.8}The Particle Flow algorithm}{9}{subsubsection.3.2.8}\protected@file@percent } +\newlabel{the-particle-flow-algorithm}{{3.2.8}{9}{The Particle Flow algorithm}{subsubsection.3.2.8}{}} 
+\BKM@entry{id=18,dest={73756273656374696F6E2E332E33},srcline={615},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304A5C303030655C303030745C3030305C3034305C303030635C3030306C5C303030755C303030735C303030745C303030655C303030725C303030695C3030306E5C30303067} +\abx@aux@cite{ANTIKT} +\abx@aux@segm{0}{0}{ANTIKT} +\abx@aux@segm{0}{0}{ANTIKT} +\@writefile{toc}{\contentsline {subsection}{\numberline {3.3}Jet clustering}{10}{subsection.3.3}\protected@file@percent } +\newlabel{jet-clustering}{{3.3}{10}{Jet clustering}{subsection.3.3}{}} +\@writefile{lof}{\contentsline {figure}{\numberline {5}{\ignorespaces Comparison of the \(k_t\), Cambridge/Aachen, SISCone and anti-\(k_t\) algorithms clustering a sample parton-level event with many random soft \enquote {ghosts}. Taken from \autocite {ANTIKT}\relax }}{11}{figure.caption.6}\protected@file@percent } +\newlabel{fig:antiktcomparison}{{5}{11}{Comparison of the \(k_t\), Cambridge/Aachen, SISCone and anti-\(k_t\) algorithms clustering a sample parton-level event with many random soft \enquote {ghosts}. Taken from \autocite {ANTIKT}\relax }{figure.caption.6}{}} +\BKM@entry{id=19,dest={73656374696F6E2E34},srcline={660},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304D5C303030655C303030745C303030685C3030306F5C303030645C3030305C3034305C3030306F5C303030665C3030305C3034305C303030615C3030306E5C303030615C3030306C5C303030795C303030735C303030695C30303073} +\abx@aux@segm{0}{0}{PREV_RESEARCH} +\abx@aux@segm{0}{0}{PREV_RESEARCH} +\BKM@entry{id=20,dest={73756273656374696F6E2E342E31},srcline={700},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030695C303030675C3030306E5C303030615C3030306C5C3030305C3034305C303030615C3030306E5C303030645C3030305C3034305C303030425C303030615C303030635C3030306B5C303030675C303030725C3030306F5C303030755C3030306E5C303030645C3030305C3034305C3030306D5C3030306F5C303030645C303030655C3030306C5C3030306C5C303030695C3030306E5C30303067} \abx@aux@segm{0}{0}{QSTAR_THEORY} -\@writefile{toc}{\contentsline {section}{\numberline {4}Method of analysis}{11}{section.4}\protected@file@percent } -\newlabel{method-of-analysis}{{4}{11}{Method of analysis}{section.4}{}} -\@writefile{lot}{\contentsline {table}{\numberline {1}{\ignorespaces Branching ratios of the decaying q* particle.\relax }}{11}{table.1}\protected@file@percent } +\@writefile{toc}{\contentsline {section}{\numberline {4}Method of analysis}{12}{section.4}\protected@file@percent } +\newlabel{sec:moa}{{4}{12}{Method of analysis}{section.4}{}} \@writefile{toc}{\contentsline {subsection}{\numberline {4.1}Signal and Background modelling}{12}{subsection.4.1}\protected@file@percent } \newlabel{signal-and-background-modelling}{{4.1}{12}{Signal and Background modelling}{subsection.4.1}{}} -\BKM@entry{id=21,dest={73656374696F6E2E35},srcline={714},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030505C303030725C303030655C303030735C303030655C3030306C5C303030655C303030635C303030745C303030695C3030306F5C3030306E5C3030305C3034305C303030615C3030306E5C303030645C3030305C3034305C303030645C303030615C303030745C303030615C3030305C3034305C303030715C303030755C303030615C3030306C5C303030695C303030745C30303079} -\BKM@entry{id=22,dest={73756273656374696F6E2E352E31},srcline={727},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030505C303030725C303030655C303030735C303030655C3030306C5C303030655C303030635C303030745C303030695C3030306F5C3030306E} \@writefile{lof}{\contentsline {figure}{\numberline {6}{\ignorespaces Combined fit of signal and background on a toy dataset with 
gaussian errors and a simulated resonance mass of 3 TeV.\relax }}{13}{figure.caption.7}\protected@file@percent } \newlabel{fig:cb_fit}{{6}{13}{Combined fit of signal and background on a toy dataset with gaussian errors and a simulated resonance mass of 3 TeV.\relax }{figure.caption.7}{}} -\@writefile{toc}{\contentsline {section}{\numberline {5}Preselection and data quality}{13}{section.5}\protected@file@percent } -\newlabel{preselection-and-data-quality}{{5}{13}{Preselection and data quality}{section.5}{}} -\@writefile{toc}{\contentsline {subsection}{\numberline {5.1}Preselection}{13}{subsection.5.1}\protected@file@percent } -\newlabel{preselection}{{5.1}{13}{Preselection}{subsection.5.1}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {7}{\ignorespaces Number of jet distribution showing the cut at number of jets $\ge $ 2. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018. The signal curves are amplified by a factor of 10,000, to be visible.\relax }}{14}{figure.caption.8}\protected@file@percent } -\newlabel{fig:njets}{{7}{14}{Number of jet distribution showing the cut at number of jets $\ge $ 2. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018. The signal curves are amplified by a factor of 10,000, to be visible.\relax }{figure.caption.8}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {8}{\ignorespaces $\mitDelta \miteta $ distribution showing the cut at $\mitDelta \miteta \le 1.3$. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018. The signal curves are amplified by a factor of 10,000, to be visible.\relax }}{15}{figure.caption.9}\protected@file@percent } -\newlabel{fig:deta}{{8}{15}{$\Delta \eta $ distribution showing the cut at $\Delta \eta \le 1.3$. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018. The signal curves are amplified by a factor of 10,000, to be visible.\relax }{figure.caption.9}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {9}{\ignorespaces Invariant mass distribution showing the cut at $m_{jj} \ge \SI {1050}{\giga \eV }$. It shows the expected smooth falling functions of the background whereas the signal peaks at the simulated resonance mass. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018.\relax }}{16}{figure.caption.10}\protected@file@percent } -\newlabel{fig:invmass}{{9}{16}{Invariant mass distribution showing the cut at $m_{jj} \ge \SI {1050}{\giga \eV }$. It shows the expected smooth falling functions of the background whereas the signal peaks at the simulated resonance mass. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 
2nd row: combined data from 2016, 2017 and 2018.\relax }{figure.caption.10}{}} -\BKM@entry{id=23,dest={73756273656374696F6E2E352E32},srcline={836},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030445C303030615C303030745C303030615C3030305C3034305C3030302D5C3030305C3034305C3030304D5C3030306F5C3030306E5C303030745C303030655C3030305C3034305C303030435C303030615C303030725C3030306C5C3030306F5C3030305C3034305C303030435C3030306F5C3030306D5C303030705C303030615C303030725C303030695C303030735C3030306F5C3030306E} -\BKM@entry{id=24,dest={73756273756273656374696F6E2E352E322E31},srcline={877},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030695C303030645C303030655C303030625C303030615C3030306E5C30303064} -\@writefile{toc}{\contentsline {subsection}{\numberline {5.2}Data - Monte Carlo Comparison}{17}{subsection.5.2}\protected@file@percent } -\newlabel{data---monte-carlo-comparison}{{5.2}{17}{Data - Monte Carlo Comparison}{subsection.5.2}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {10}{\ignorespaces Comparision of data with the Monte Carlo simulation. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018.\relax }}{17}{figure.caption.11}\protected@file@percent } -\newlabel{fig:data-mc}{{10}{17}{Comparision of data with the Monte Carlo simulation. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018.\relax }{figure.caption.11}{}} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {5.2.1}Sideband}{18}{subsubsection.5.2.1}\protected@file@percent } -\newlabel{sideband}{{5.2.1}{18}{Sideband}{subsubsection.5.2.1}{}} +\BKM@entry{id=21,dest={73656374696F6E2E35},srcline={768},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030505C303030725C303030655C303030735C303030655C3030306C5C303030655C303030635C303030745C303030695C3030306F5C3030306E5C3030305C3034305C303030615C3030306E5C303030645C3030305C3034305C303030645C303030615C303030745C303030615C3030305C3034305C303030715C303030755C303030615C3030306C5C303030695C303030745C30303079} +\BKM@entry{id=22,dest={73756273656374696F6E2E352E31},srcline={782},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030505C303030725C303030655C303030735C303030655C3030306C5C303030655C303030635C303030745C303030695C3030306F5C3030306E} +\@writefile{toc}{\contentsline {section}{\numberline {5}Preselection and data quality}{14}{section.5}\protected@file@percent } +\newlabel{preselection-and-data-quality}{{5}{14}{Preselection and data quality}{section.5}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.1}Preselection}{14}{subsection.5.1}\protected@file@percent } +\newlabel{preselection}{{5.1}{14}{Preselection}{subsection.5.1}{}} +\@writefile{lof}{\contentsline {figure}{\numberline {7}{\ignorespaces Number of jet distribution showing the cut at number of jets $\ge $ 2. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018. The signal curves are amplified by a factor of 10,000, to be visible.\relax }}{15}{figure.caption.8}\protected@file@percent } +\newlabel{fig:njets}{{7}{15}{Number of jet distribution showing the cut at number of jets $\ge $ 2. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018. 
The signal curves are amplified by a factor of 10,000, to be visible.\relax }{figure.caption.8}{}}
+\@writefile{lof}{\contentsline {figure}{\numberline {8}{\ignorespaces $\mitDelta \miteta $ distribution showing the cut at $\mitDelta \miteta \le 1.3$. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018. The signal curves are amplified by a factor of 10,000, to be visible.\relax }}{16}{figure.caption.9}\protected@file@percent }
+\newlabel{fig:deta}{{8}{16}{$\Delta \eta $ distribution showing the cut at $\Delta \eta \le 1.3$. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018. The signal curves are amplified by a factor of 10,000, to be visible.\relax }{figure.caption.9}{}}
+\@writefile{lof}{\contentsline {figure}{\numberline {9}{\ignorespaces Invariant mass distribution showing the cut at $m_{jj} \ge \SI {1050}{\giga \eV }$. It shows the expected smooth falling functions of the background whereas the signal peaks at the simulated resonance mass. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018.\relax }}{17}{figure.caption.10}\protected@file@percent }
+\newlabel{fig:invmass}{{9}{17}{Invariant mass distribution showing the cut at $m_{jj} \ge \SI {1050}{\giga \eV }$. It shows the expected smooth falling functions of the background whereas the signal peaks at the simulated resonance mass. Left: distribution before the cut. Right: distribution after the cut. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018.\relax }{figure.caption.10}{}}
+\BKM@entry{id=23,dest={73756273656374696F6E2E352E32},srcline={898},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030445C303030615C303030745C303030615C3030305C3034305C3030302D5C3030305C3034305C3030304D5C3030306F5C3030306E5C303030745C303030655C3030305C3034305C303030435C303030615C303030725C3030306C5C3030306F5C3030305C3034305C303030435C3030306F5C3030306D5C303030705C303030615C303030725C303030695C303030735C3030306F5C3030306E}
+\BKM@entry{id=24,dest={73756273756273656374696F6E2E352E322E31},srcline={944},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030695C303030645C303030655C303030625C303030615C3030306E5C30303064}
+\@writefile{toc}{\contentsline {subsection}{\numberline {5.2}Data - Monte Carlo Comparison}{18}{subsection.5.2}\protected@file@percent }
+\newlabel{data---monte-carlo-comparison}{{5.2}{18}{Data - Monte Carlo Comparison}{subsection.5.2}{}}
+\@writefile{lof}{\contentsline {figure}{\numberline {10}{\ignorespaces Comparison of data with the Monte Carlo simulation. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018.\relax }}{18}{figure.caption.11}\protected@file@percent }
+\newlabel{fig:data-mc}{{10}{18}{Comparison of data with the Monte Carlo simulation. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018.\relax }{figure.caption.11}{}}
+\@writefile{toc}{\contentsline {subsubsection}{\numberline {5.2.1}Sideband}{19}{subsubsection.5.2.1}\protected@file@percent }
+\newlabel{sideband}{{5.2.1}{19}{Sideband}{subsubsection.5.2.1}{}}
 \@writefile{lof}{\contentsline {figure}{\numberline {11}{\ignorespaces Comparison of data with the Monte Carlo simulation in the sideband region. 1st row: data from 2016.
2nd row: combined data from 2016, 2017 and 2018.\relax }}{19}{figure.caption.12}\protected@file@percent } \newlabel{fig:sideband}{{11}{19}{Comparison of data with the Monte Carlo simulation in the sideband region. 1st row: data from 2016. 2nd row: combined data from 2016, 2017 and 2018.\relax }{figure.caption.12}{}} -\BKM@entry{id=25,dest={73656374696F6E2E36},srcline={918},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304A5C303030655C303030745C3030305C3034305C303030735C303030755C303030625C303030735C303030745C303030725C303030755C303030635C303030745C303030755C303030725C303030655C3030305C3034305C303030735C303030655C3030306C5C303030655C303030635C303030745C303030695C3030306F5C3030306E} -\BKM@entry{id=26,dest={73756273656374696F6E2E362E31},srcline={938},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304E5C3030302D5C303030535C303030755C303030625C3030306A5C303030655C303030745C303030745C303030695C3030306E5C303030655C303030735C30303073} -\BKM@entry{id=27,dest={73756273656374696F6E2E362E32},srcline={967},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030445C303030655C303030655C303030705C303030415C3030304B5C30303038} +\BKM@entry{id=25,dest={73656374696F6E2E36},srcline={982},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304A5C303030655C303030745C3030305C3034305C303030735C303030755C303030625C303030735C303030745C303030725C303030755C303030635C303030745C303030755C303030725C303030655C3030305C3034305C303030735C303030655C3030306C5C303030655C303030635C303030745C303030695C3030306F5C3030306E} +\BKM@entry{id=26,dest={73756273656374696F6E2E362E31},srcline={1007},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304E5C3030302D5C303030535C303030755C303030625C3030306A5C303030655C303030745C303030745C303030695C3030306E5C303030655C303030735C30303073} +\BKM@entry{id=27,dest={73756273656374696F6E2E362E32},srcline={1040},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030445C303030655C303030655C303030705C303030415C3030304B5C30303038} \@writefile{toc}{\contentsline {section}{\numberline {6}Jet substructure selection}{20}{section.6}\protected@file@percent } \newlabel{jet-substructure-selection}{{6}{20}{Jet substructure selection}{section.6}{}} \@writefile{toc}{\contentsline {subsection}{\numberline {6.1}N-Subjettiness}{20}{subsection.6.1}\protected@file@percent } \newlabel{n-subjettiness}{{6.1}{20}{N-Subjettiness}{subsection.6.1}{}} -\@writefile{toc}{\contentsline {subsection}{\numberline {6.2}DeepAK8}{20}{subsection.6.2}\protected@file@percent } -\newlabel{deepak8}{{6.2}{20}{DeepAK8}{subsection.6.2}{}} -\BKM@entry{id=28,dest={73756273656374696F6E2E362E33},srcline={1002},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304F5C303030705C303030745C303030695C3030306D5C303030695C3030307A5C303030615C303030745C303030695C3030306F5C3030306E} +\BKM@entry{id=28,dest={73756273656374696F6E2E362E33},srcline={1078},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C3030304F5C303030705C303030745C303030695C3030306D5C303030695C3030307A5C303030615C303030745C303030695C3030306F5C3030306E} +\@writefile{toc}{\contentsline {subsection}{\numberline {6.2}DeepAK8}{21}{subsection.6.2}\protected@file@percent } +\newlabel{deepak8}{{6.2}{21}{DeepAK8}{subsection.6.2}{}} \@writefile{toc}{\contentsline {subsection}{\numberline {6.3}Optimization}{21}{subsection.6.3}\protected@file@percent } -\newlabel{optimization}{{6.3}{21}{Optimization}{subsection.6.3}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {12}{\ignorespaces Significance plots for the deep boosted (left) and 
N-subjettiness (right) tagger at the 2 TeV masspoint.\relax }}{21}{figure.caption.13}\protected@file@percent } -\newlabel{fig:sig}{{12}{21}{Significance plots for the deep boosted (left) and N-subjettiness (right) tagger at the 2 TeV masspoint.\relax }{figure.caption.13}{}} -\BKM@entry{id=29,dest={73656374696F6E2E37},srcline={1046},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030695C303030675C3030306E5C303030615C3030306C5C3030305C3034305C303030655C303030785C303030745C303030725C303030615C303030635C303030745C303030695C3030306F5C3030306E} -\BKM@entry{id=30,dest={73756273656374696F6E2E372E31},srcline={1063},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030555C3030306E5C303030635C303030655C303030725C303030745C303030615C303030695C3030306E5C303030745C303030695C303030655C30303073} -\BKM@entry{id=31,dest={73656374696F6E2E38},srcline={1085},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030525C303030655C303030735C303030755C3030306C5C303030745C30303073} -\BKM@entry{id=32,dest={73756273656374696F6E2E382E31},srcline={1092},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030325C303030305C303030315C30303036} +\newlabel{sec:opt}{{6.3}{21}{Optimization}{subsection.6.3}{}} +\BKM@entry{id=29,dest={73656374696F6E2E37},srcline={1128},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030695C303030675C3030306E5C303030615C3030306C5C3030305C3034305C303030655C303030785C303030745C303030725C303030615C303030635C303030745C303030695C3030306F5C3030306E} +\BKM@entry{id=30,dest={73756273656374696F6E2E372E31},srcline={1152},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030555C3030306E5C303030635C303030655C303030725C303030745C303030615C303030695C3030306E5C303030745C303030695C303030655C30303073} +\@writefile{lof}{\contentsline {figure}{\numberline {12}{\ignorespaces Significance plots for the deep boosted (left) and N-subjettiness (right) tagger at the 2 TeV masspoint.\relax }}{22}{figure.caption.13}\protected@file@percent } +\newlabel{fig:sig}{{12}{22}{Significance plots for the deep boosted (left) and N-subjettiness (right) tagger at the 2 TeV masspoint.\relax }{figure.caption.13}{}} \@writefile{toc}{\contentsline {section}{\numberline {7}Signal extraction}{22}{section.7}\protected@file@percent } -\newlabel{signal-extraction}{{7}{22}{Signal extraction}{section.7}{}} -\@writefile{toc}{\contentsline {subsection}{\numberline {7.1}Uncertainties}{22}{subsection.7.1}\protected@file@percent } -\newlabel{uncertainties}{{7.1}{22}{Uncertainties}{subsection.7.1}{}} -\@writefile{toc}{\contentsline {section}{\numberline {8}Results}{22}{section.8}\protected@file@percent } -\newlabel{results}{{8}{22}{Results}{section.8}{}} +\newlabel{sec:extr}{{7}{22}{Signal extraction}{section.7}{}} +\BKM@entry{id=31,dest={73656374696F6E2E38},srcline={1180},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030525C303030655C303030735C303030755C3030306C5C303030745C30303073} +\abx@aux@segm{0}{0}{PREV_RESEARCH} +\BKM@entry{id=32,dest={73756273656374696F6E2E382E31},srcline={1188},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030325C303030305C303030315C30303036} +\@writefile{toc}{\contentsline {subsection}{\numberline {7.1}Uncertainties}{23}{subsection.7.1}\protected@file@percent } +\newlabel{uncertainties}{{7.1}{23}{Uncertainties}{subsection.7.1}{}} +\@writefile{toc}{\contentsline {section}{\numberline {8}Results}{23}{section.8}\protected@file@percent } +\newlabel{results}{{8}{23}{Results}{section.8}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline 
{8.1}2016}{23}{subsection.8.1}\protected@file@percent } +\newlabel{section}{{8.1}{23}{2016}{subsection.8.1}{}} \gdef \LT@ii {\LT@entry - {1}{63.6504pt}\LT@entry - {1}{83.9922pt}\LT@entry - {1}{90.6504pt}\LT@entry - {1}{91.98048pt}\LT@entry - {1}{77.99806pt}} + {1}{36.64455pt}\LT@entry + {3}{74.98244pt}\LT@entry + {1}{70.98048pt}\LT@entry + {1}{103.96877pt}\LT@entry + {1}{99.29884pt}} +\BKM@entry{id=33,dest={73756273756273656374696F6E2E382E312E31},srcline={1249},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030505C303030725C303030655C303030765C303030695C3030306F5C303030755C303030735C3030305C3034305C303030725C303030655C303030735C303030655C303030615C303030725C303030635C30303068} +\newlabel{tbl:res2016}{{2}{24}{2016}{table.2}{}} +\@writefile{lot}{\contentsline {table}{\numberline {2}{\ignorespaces Mass limits found using the data collected in 2016\relax }}{24}{table.2}\protected@file@percent } +\@writefile{lof}{\contentsline {figure}{\numberline {13}{\ignorespaces Results of the cross section limits for 2016 using the $\mittau _{21}$ tagger (left) and the deep boosted tagger (right).\relax }}{24}{figure.caption.14}\protected@file@percent } +\newlabel{fig:res2016}{{13}{24}{Results of the cross section limits for 2016 using the $\tau _{21}$ tagger (left) and the deep boosted tagger (right).\relax }{figure.caption.14}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {8.1.1}Previous research}{24}{subsubsection.8.1.1}\protected@file@percent } +\newlabel{previous-research}{{8.1.1}{24}{Previous research}{subsubsection.8.1.1}{}} +\abx@aux@segm{0}{0}{PREV_RESEARCH} +\abx@aux@segm{0}{0}{PREV_RESEARCH} +\BKM@entry{id=34,dest={73756273656374696F6E2E382E32},srcline={1280},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030435C3030306F5C3030306D5C303030625C303030695C3030306E5C303030655C303030645C3030305C3034305C303030645C303030615C303030745C303030615C303030735C303030655C30303074} \gdef \LT@iii {\LT@entry - {1}{63.6504pt}\LT@entry - {1}{83.9922pt}\LT@entry - {1}{90.6504pt}\LT@entry - {1}{91.98048pt}\LT@entry - {1}{77.99806pt}} + {1}{36.64455pt}\LT@entry + {3}{74.98244pt}\LT@entry + {1}{70.98048pt}\LT@entry + {1}{103.96877pt}\LT@entry + {1}{99.29884pt}} +\abx@aux@segm{0}{0}{PREV_RESEARCH} +\BKM@entry{id=35,dest={73756273656374696F6E2E382E33},srcline={1335},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030435C3030306F5C3030306D5C303030705C303030615C303030725C303030695C303030735C3030306F5C3030306E5C3030305C3034305C3030306F5C303030665C3030305C3034305C303030745C303030615C303030675C303030675C303030655C303030725C30303073} +\@writefile{lof}{\contentsline {figure}{\numberline {14}{\ignorespaces Previous results of the cross section limits for q\discretionary {\tmspace +\thinmuskip {.1667em}\TU/latinmodern-math.otf(2)/m/n/12 \char 2}{}{} decaying to qW (left) and q\discretionary {\tmspace +\thinmuskip {.1667em}\TU/latinmodern-math.otf(2)/m/n/12 \char 2}{}{} decaying to qZ (right). Taken from \cite {PREV_RESEARCH}.\relax }}{25}{figure.caption.15}\protected@file@percent } +\newlabel{fig:prev}{{14}{25}{Previous results of the cross section limits for q\* decaying to qW (left) and q\* decaying to qZ (right). 
Taken from \cite {PREV_RESEARCH}.\relax }{figure.caption.15}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {8.2}Combined dataset}{25}{subsection.8.2}\protected@file@percent } +\newlabel{combined-dataset}{{8.2}{25}{Combined dataset}{subsection.8.2}{}} +\@writefile{lot}{\contentsline {table}{\numberline {3}{\ignorespaces Mass limits found using the data collected in 2016 - 2018\relax }}{25}{table.3}\protected@file@percent } +\@writefile{toc}{\contentsline {subsection}{\numberline {8.3}Comparison of taggers}{25}{subsection.8.3}\protected@file@percent } +\newlabel{comparison-of-taggers}{{8.3}{25}{Comparison of taggers}{subsection.8.3}{}} +\@writefile{lof}{\contentsline {figure}{\numberline {15}{\ignorespaces Results of the cross section limits for the three combined years using the $\mittau _{21}$ tagger (left) and the deep boosted tagger (right).\relax }}{26}{figure.caption.16}\protected@file@percent } +\newlabel{fig:resCombined}{{15}{26}{Results of the cross section limits for the three combined years using the $\tau _{21}$ tagger (left) and the deep boosted tagger (right).\relax }{figure.caption.16}{}} +\@writefile{lof}{\contentsline {figure}{\numberline {16}{\ignorespaces Comparison of the deep boosted and N-subjettiness taggers in the high purity category using the data from 2018.\relax }}{27}{figure.caption.17}\protected@file@percent } +\newlabel{fig:comp_2018}{{16}{27}{Comparison of the deep boosted and N-subjettiness taggers in the high purity category using the data from 2018.\relax }{figure.caption.17}{}} +\@writefile{lof}{\contentsline {figure}{\numberline {17}{\ignorespaces Comparison of expected limits of the different taggers using different datasets. Left: decay to qW. Right: decay to qZ\relax }}{28}{figure.caption.18}\protected@file@percent } +\newlabel{fig:limit_comp}{{17}{28}{Comparison of expected limits of the different taggers using different datasets. Left: decay to qW.
Right: decay to qZ\relax }{figure.caption.18}{}} +\BKM@entry{id=36,dest={73656374696F6E2E39},srcline={1386},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030755C3030306D5C3030306D5C303030615C303030725C30303079} +\@writefile{toc}{\contentsline {section}{\numberline {9}Summary}{29}{section.9}\protected@file@percent } +\newlabel{summary}{{9}{29}{Summary}{section.9}{}} \gdef \LT@iv {\LT@entry {1}{63.6504pt}\LT@entry {1}{83.9922pt}\LT@entry {1}{90.6504pt}\LT@entry {1}{91.98048pt}\LT@entry {1}{77.99806pt}} -\@writefile{toc}{\contentsline {subsection}{\numberline {8.1}2016}{23}{subsection.8.1}\protected@file@percent } -\newlabel{section}{{8.1}{23}{2016}{subsection.8.1}{}} -\@writefile{lot}{\contentsline {table}{\numberline {2}{\ignorespaces Cross Section limits using 2016 data and the N-subjettiness tagger for the decay to qW\relax }}{23}{table.2}\protected@file@percent } -\@writefile{lot}{\contentsline {table}{\numberline {3}{\ignorespaces Cross Section limits using 2016 data and the deep boosted tagger for the decay to qW\relax }}{23}{table.3}\protected@file@percent } -\@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces Cross Section limits using 2016 data and the N-subjettiness tagger for the decay to qZ\relax }}{23}{table.4}\protected@file@percent } \gdef \LT@v {\LT@entry {1}{63.6504pt}\LT@entry {1}{83.9922pt}\LT@entry @@ -185,70 +217,49 @@ {1}{91.98048pt}\LT@entry {1}{77.99806pt}} \gdef \LT@vi {\LT@entry - {1}{36.64455pt}\LT@entry - {3}{74.98244pt}\LT@entry - {1}{70.98048pt}\LT@entry - {1}{103.96877pt}\LT@entry - {1}{99.29884pt}} -\BKM@entry{id=33,dest={73756273756273656374696F6E2E382E312E31},srcline={1250},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030505C303030725C303030655C303030765C303030695C3030306F5C303030755C303030735C3030305C3034305C303030725C303030655C303030735C303030655C303030615C303030725C303030635C30303068} -\@writefile{lot}{\contentsline {table}{\numberline {5}{\ignorespaces Cross Section limits using 2016 data and deep boosted tagger for the decay to qZ\relax }}{24}{table.5}\protected@file@percent } -\@writefile{lot}{\contentsline {table}{\numberline {6}{\ignorespaces Mass limits found using the data collected in 2016\relax }}{24}{table.6}\protected@file@percent } -\@writefile{lof}{\contentsline {figure}{\numberline {13}{\ignorespaces Results of the cross section limits for 2016 using the $\mittau _{21}$ tagger (left) and the deep boosted tagger (right).\relax }}{25}{figure.caption.14}\protected@file@percent } -\newlabel{fig:res2016}{{13}{25}{Results of the cross section limits for 2016 using the $\tau _{21}$ tagger (left) and the deep boosted tagger (right).\relax }{figure.caption.14}{}} -\abx@aux@segm{0}{0}{PREV_RESEARCH} -\abx@aux@segm{0}{0}{PREV_RESEARCH} -\BKM@entry{id=34,dest={73756273656374696F6E2E382E32},srcline={1275},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030325C303030305C303030315C303030365C3030305C3034305C3030302B5C3030305C3034305C303030325C303030305C303030315C303030375C3030305C3034305C3030302B5C3030305C3034305C303030325C303030305C303030315C30303038} + {1}{63.6504pt}\LT@entry + {1}{83.9922pt}\LT@entry + {1}{90.6504pt}\LT@entry + {1}{91.98048pt}\LT@entry + {1}{77.99806pt}} +\newlabel{appendix}{{9}{32}{Appendix}{section*.20}{}} +\@writefile{lot}{\contentsline {table}{\numberline {4}{\ignorespaces Cross Section limits using 2016 data and the N-subjettiness tagger for the decay to qW\relax }}{32}{table.4}\protected@file@percent } +\@writefile{lot}{\contentsline {table}{\numberline {5}{\ignorespaces 
Cross Section limits using 2016 data and the deep boosted tagger for the decay to qW\relax }}{32}{table.5}\protected@file@percent } +\@writefile{lot}{\contentsline {table}{\numberline {6}{\ignorespaces Cross Section limits using 2016 data and the N-subjettiness tagger for the decay to qZ\relax }}{32}{table.6}\protected@file@percent } \gdef \LT@vii {\LT@entry {1}{63.6504pt}\LT@entry {1}{83.9922pt}\LT@entry {1}{90.6504pt}\LT@entry {1}{91.98048pt}\LT@entry {1}{77.99806pt}} -\@writefile{toc}{\contentsline {subsubsection}{\numberline {8.1.1}Previous research}{26}{subsubsection.8.1.1}\protected@file@percent } -\newlabel{previous-research}{{8.1.1}{26}{Previous research}{subsubsection.8.1.1}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {14}{\ignorespaces Previous results of the cross section limits for q\discretionary {\tmspace +\thinmuskip {.1667em}\TU/latinmodern-math.otf(2)/m/n/12 \char 2}{}{} decaying to qW (left) and q\discretionary {\tmspace +\thinmuskip {.1667em}\TU/latinmodern-math.otf(2)/m/n/12 \char 2}{}{} decaying to qZ (right). Taken from \cite {PREV_RESEARCH}.\relax }}{26}{figure.caption.15}\protected@file@percent } -\newlabel{fig:prev}{{14}{26}{Previous results of the cross section limits for q\* decaying to qW (left) and q\* decaying to qZ (right). Taken from \cite {PREV_RESEARCH}.\relax }{figure.caption.15}{}} -\@writefile{toc}{\contentsline {subsection}{\numberline {8.2}2016 + 2017 + 2018}{26}{subsection.8.2}\protected@file@percent } -\newlabel{section-1}{{8.2}{26}{2016 + 2017 + 2018}{subsection.8.2}{}} -\@writefile{lot}{\contentsline {table}{\numberline {7}{\ignorespaces Cross Section limits using the combined data and the N-subjettiness tagger for the decay to qW\relax }}{26}{table.7}\protected@file@percent } \gdef \LT@viii {\LT@entry {1}{63.6504pt}\LT@entry {1}{83.9922pt}\LT@entry {1}{90.6504pt}\LT@entry {1}{91.98048pt}\LT@entry {1}{77.99806pt}} +\@writefile{lot}{\contentsline {table}{\numberline {7}{\ignorespaces Cross Section limits using 2016 data and deep boosted tagger for the decay to qZ\relax }}{33}{table.7}\protected@file@percent } +\@writefile{lot}{\contentsline {table}{\numberline {8}{\ignorespaces Cross Section limits using the combined data and the N-subjettiness tagger for the decay to qW\relax }}{33}{table.8}\protected@file@percent } \gdef \LT@ix {\LT@entry {1}{63.6504pt}\LT@entry {1}{83.9922pt}\LT@entry {1}{90.6504pt}\LT@entry {1}{91.98048pt}\LT@entry {1}{77.99806pt}} -\@writefile{lot}{\contentsline {table}{\numberline {8}{\ignorespaces Cross Section limits using the combined data and the deep boosted tagger for the decay to qW\relax }}{27}{table.8}\protected@file@percent } -\@writefile{lot}{\contentsline {table}{\numberline {9}{\ignorespaces Cross Section limits using the combined data and the N-subjettiness tagger for the decay to qZ\relax }}{27}{table.9}\protected@file@percent } \gdef \LT@x {\LT@entry {1}{63.6504pt}\LT@entry {1}{83.9922pt}\LT@entry {1}{90.6504pt}\LT@entry {1}{91.98048pt}\LT@entry {1}{77.99806pt}} +\@writefile{lot}{\contentsline {table}{\numberline {9}{\ignorespaces Cross Section limits using the combined data and the deep boosted tagger for the decay to qW\relax }}{34}{table.9}\protected@file@percent } +\@writefile{lot}{\contentsline {table}{\numberline {10}{\ignorespaces Cross Section limits using the combined data and the N-subjettiness tagger for the decay to qZ\relax }}{34}{table.10}\protected@file@percent } \gdef \LT@xi {\LT@entry - {1}{36.64455pt}\LT@entry - {3}{74.98244pt}\LT@entry - {1}{70.98048pt}\LT@entry - 
{1}{103.96877pt}\LT@entry - {1}{99.29884pt}} -\BKM@entry{id=35,dest={73756273656374696F6E2E382E33},srcline={1434},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030435C3030306F5C3030306D5C303030705C303030615C303030725C303030695C303030735C3030306F5C3030306E5C3030305C3034305C3030306F5C303030665C3030305C3034305C303030745C303030615C303030675C303030675C303030655C303030725C30303073} -\@writefile{lot}{\contentsline {table}{\numberline {10}{\ignorespaces Cross Section limits using the combined data and deep boosted tagger for the decay to qZ\relax }}{28}{table.10}\protected@file@percent } -\@writefile{lot}{\contentsline {table}{\numberline {11}{\ignorespaces Mass limits found using the data collected in 2016 - 2018\relax }}{28}{table.11}\protected@file@percent } -\@writefile{toc}{\contentsline {subsection}{\numberline {8.3}Comparison of taggers}{28}{subsection.8.3}\protected@file@percent } -\newlabel{comparison-of-taggers}{{8.3}{28}{Comparison of taggers}{subsection.8.3}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {15}{\ignorespaces Results of the cross section limits for the three combined years using the $\mittau _{21}$ tagger (left) and the deep boosted tagger (right).\relax }}{29}{figure.caption.16}\protected@file@percent } -\newlabel{fig:resCombined}{{15}{29}{Results of the cross section limits for the three combined years using the $\tau _{21}$ tagger (left) and the deep boosted tagger (right).\relax }{figure.caption.16}{}} -\@writefile{lof}{\contentsline {figure}{\numberline {16}{\ignorespaces Comparison of expected limits of the different taggers using different datasets. Left: decay to qW. Right: decay to qZ\relax }}{29}{figure.caption.17}\protected@file@percent } -\newlabel{fig:limit_comp}{{16}{29}{Comparison of expected limits of the different taggers using different datasets. Left: decay to qW. 
Right: decay to qZ\relax }{figure.caption.17}{}} -\BKM@entry{id=36,dest={73656374696F6E2E39},srcline={1463},srcfile={2E2F7468657369732E746578}}{5C3337365C3337375C303030535C303030755C3030306D5C3030306D5C303030615C303030725C30303079} -\@writefile{toc}{\contentsline {section}{\numberline {9}Summary}{30}{section.9}\protected@file@percent } -\newlabel{summary}{{9}{30}{Summary}{section.9}{}} + {1}{63.6504pt}\LT@entry + {1}{83.9922pt}\LT@entry + {1}{90.6504pt}\LT@entry + {1}{91.98048pt}\LT@entry + {1}{77.99806pt}} \abx@aux@refcontextdefaultsdone \abx@aux@defaultrefcontext{0}{LHC}{nty/global//global/global} \abx@aux@defaultrefcontext{0}{QSTAR_THEORY}{nty/global//global/global} @@ -265,3 +276,4 @@ \abx@aux@defaultrefcontext{0}{TAU21_TAGGER}{nty/global//global/global} \abx@aux@defaultrefcontext{0}{CMS_TRIGGER}{nty/global//global/global} \abx@aux@defaultrefcontext{0}{HADRONIZATION}{nty/global//global/global} +\@writefile{lot}{\contentsline {table}{\numberline {11}{\ignorespaces Cross Section limits using the combined data and deep boosted tagger for the decay to qZ\relax }}{35}{table.11}\protected@file@percent } diff --git a/thesis.bcf b/thesis.bcf index e4969c0..e59acb5 100644 --- a/thesis.bcf +++ b/thesis.bcf @@ -1995,12 +1995,19 @@ bibliography.bib - website + PREV_RESEARCH QSTAR_THEORY PREV_RESEARCH - QSTAR_THEORY - PREV_RESEARCH - PREV_RESEARCH + website + ANTIKT + ANTIKT + PREV_RESEARCH + PREV_RESEARCH + QSTAR_THEORY + PREV_RESEARCH + PREV_RESEARCH + PREV_RESEARCH + PREV_RESEARCH * diff --git a/thesis.blg b/thesis.blg index 23324b5..8b41d2e 100644 --- a/thesis.blg +++ b/thesis.blg @@ -1,28 +1,28 @@ [0] Config.pm:304> INFO - This is Biber 2.12 [0] Config.pm:307> INFO - Logfile is 'thesis.blg' -[20] biber:315> INFO - === Mi Okt 16, 2019, 13:44:43 -[37] Biber.pm:371> INFO - Reading 'thesis.bcf' -[91] Biber.pm:886> INFO - Using all citekeys in bib section 0 -[102] Biber.pm:4093> INFO - Processing section 0 -[111] Biber.pm:4254> INFO - Looking for bibtex format file 'bibliography.bib' for section 0 -[115] bibtex.pm:1523> INFO - LaTeX decoding ... -[178] bibtex.pm:1340> INFO - Found BibTeX data source 'bibliography.bib' -[183] Utils.pm:193> WARN - month field 'Nov' in entry 'HADRONIZATION' is not an integer - this will probably not sort properly. -[194] Utils.pm:193> WARN - month field 'Apr' in entry 'ANTIKT' is not an integer - this will probably not sort properly. -[2543] Utils.pm:193> WARN - month field 'aug' in entry 'LHC_MACHINE' is not an integer - this will probably not sort properly. -[2551] Utils.pm:193> WARN - month field 'Mar' in entry 'TAU21_TAGGER' is not an integer - this will probably not sort properly. -[2813] Utils.pm:193> WARN - month field 'Aug' in entry 'PARTICLE_PHYSICS' is not an integer - this will probably not sort properly. -[2816] Utils.pm:193> WARN - month field 'Jan' in entry 'PARTICLE_FLOW' is not an integer - this will probably not sort properly. -[2823] Utils.pm:193> WARN - month field 'May' in entry 'SDM' is not an integer - this will probably not sort properly. -[2829] Utils.pm:193> WARN - month field 'May' in entry 'LHC' is not an integer - this will probably not sort properly. -[2832] Utils.pm:193> WARN - month field 'Aug' in entry 'PREV_RESEARCH' is not an integer - this will probably not sort properly. -[2837] Utils.pm:193> WARN - month field 'Oct' in entry 'SUC_COMBINATION' is not an integer - this will probably not sort properly. -[2840] Utils.pm:193> WARN - month field 'Apr' in entry 'MONTECARLO' is not an integer - this will probably not sort properly. 
-[2844] Utils.pm:193> WARN - month field 'Oct' in entry 'CMS_TRIGGER' is not an integer - this will probably not sort properly. -[2911] UCollate.pm:68> INFO - Overriding locale 'en-GB' defaults 'variable = shifted' with 'variable = non-ignorable' -[2911] UCollate.pm:68> INFO - Overriding locale 'en-GB' defaults 'normalization = NFD' with 'normalization = prenormalized' -[2912] Biber.pm:3921> INFO - Sorting list 'nty/global//global/global' of type 'entry' with template 'nty' and locale 'en-GB' -[2912] Biber.pm:3927> INFO - No sort tailoring available for locale 'en-GB' -[2992] bbl.pm:636> INFO - Writing 'thesis.bbl' with encoding 'UTF-8' -[6640] bbl.pm:739> INFO - Output to thesis.bbl -[6642] Biber.pm:110> INFO - WARNINGS: 12 +[20] biber:315> INFO - === Fr Okt 25, 2019, 10:00:15 +[39] Biber.pm:371> INFO - Reading 'thesis.bcf' +[92] Biber.pm:886> INFO - Using all citekeys in bib section 0 +[105] Biber.pm:4093> INFO - Processing section 0 +[114] Biber.pm:4254> INFO - Looking for bibtex format file 'bibliography.bib' for section 0 +[118] bibtex.pm:1523> INFO - LaTeX decoding ... +[185] bibtex.pm:1340> INFO - Found BibTeX data source 'bibliography.bib' +[2634] Utils.pm:193> WARN - month field 'Nov' in entry 'HADRONIZATION' is not an integer - this will probably not sort properly. +[2638] Utils.pm:193> WARN - month field 'Aug' in entry 'PREV_RESEARCH' is not an integer - this will probably not sort properly. +[2645] Utils.pm:193> WARN - month field 'May' in entry 'SDM' is not an integer - this will probably not sort properly. +[2936] Utils.pm:193> WARN - month field 'Aug' in entry 'PARTICLE_PHYSICS' is not an integer - this will probably not sort properly. +[2939] Utils.pm:193> WARN - month field 'Apr' in entry 'MONTECARLO' is not an integer - this will probably not sort properly. +[2944] Utils.pm:193> WARN - month field 'Oct' in entry 'SUC_COMBINATION' is not an integer - this will probably not sort properly. +[2950] Utils.pm:193> WARN - month field 'Apr' in entry 'ANTIKT' is not an integer - this will probably not sort properly. +[2953] Utils.pm:193> WARN - month field 'Oct' in entry 'CMS_TRIGGER' is not an integer - this will probably not sort properly. +[2957] Utils.pm:193> WARN - month field 'Mar' in entry 'TAU21_TAGGER' is not an integer - this will probably not sort properly. +[2958] Utils.pm:193> WARN - month field 'aug' in entry 'LHC_MACHINE' is not an integer - this will probably not sort properly. +[2970] Utils.pm:193> WARN - month field 'May' in entry 'LHC' is not an integer - this will probably not sort properly. +[2973] Utils.pm:193> WARN - month field 'Jan' in entry 'PARTICLE_FLOW' is not an integer - this will probably not sort properly. 
+[3046] UCollate.pm:68> INFO - Overriding locale 'en-GB' defaults 'variable = shifted' with 'variable = non-ignorable' +[3046] UCollate.pm:68> INFO - Overriding locale 'en-GB' defaults 'normalization = NFD' with 'normalization = prenormalized' +[3046] Biber.pm:3921> INFO - Sorting list 'nty/global//global/global' of type 'entry' with template 'nty' and locale 'en-GB' +[3047] Biber.pm:3927> INFO - No sort tailoring available for locale 'en-GB' +[3132] bbl.pm:636> INFO - Writing 'thesis.bbl' with encoding 'UTF-8' +[7189] bbl.pm:739> INFO - Output to thesis.bbl +[7190] Biber.pm:110> INFO - WARNINGS: 12 diff --git a/thesis.log b/thesis.log index 71902d7..aa6676b 100644 --- a/thesis.log +++ b/thesis.log @@ -1,4 +1,4 @@ -This is LuaTeX, Version 1.10.0 (TeX Live 2019/Arch Linux) (format=lualatex 2019.8.29) 16 OCT 2019 13:44 +This is LuaTeX, Version 1.10.0 (TeX Live 2019/Arch Linux) (format=lualatex 2019.8.29) 25 OCT 2019 10:00 restricted system commands enabled. **thesis (./thesis.tex @@ -48,7 +48,7 @@ Inserting `luaotfload.aux.set_capheight' at position 3 in `luaotfload.patch_font '. Inserting `luaotfload.rewrite_fontname' at position 4 in `luaotfload.patch_font' . -luaotfload | main : initialization completed in 0.054 seconds +luaotfload | main : initialization completed in 0.052 seconds (/usr/share/texmf-dist/tex/latex/base/article.cls Document Class: article 2018/09/03 v1.4i Standard LaTeX document class (/usr/share/texmf-dist/tex/latex/base/size12.clo @@ -1691,34 +1691,34 @@ File: english.lbx 2018/11/02 v3.12 biblatex localization (PK/MW) )) (./thesis.aux) \openout1 = thesis.aux -LaTeX Font Info: Checking defaults for OML/cmm/m/it on input line 116. -LaTeX Font Info: ... okay on input line 116. -LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 116. -LaTeX Font Info: ... okay on input line 116. -LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 116. -LaTeX Font Info: ... okay on input line 116. -LaTeX Font Info: Checking defaults for OMS/cmsy/m/n on input line 116. -LaTeX Font Info: ... okay on input line 116. -LaTeX Font Info: Checking defaults for TU/lmr/m/n on input line 116. -LaTeX Font Info: ... okay on input line 116. -LaTeX Font Info: Checking defaults for OMX/cmex/m/n on input line 116. -LaTeX Font Info: ... okay on input line 116. -LaTeX Font Info: Checking defaults for U/cmr/m/n on input line 116. -LaTeX Font Info: ... okay on input line 116. -LaTeX Font Info: Checking defaults for TS1/cmr/m/n on input line 116. -LaTeX Font Info: ... okay on input line 116. -LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 116. -LaTeX Font Info: ... okay on input line 116. -LaTeX Font Info: Checking defaults for PU/pdf/m/n on input line 116. -LaTeX Font Info: ... okay on input line 116. +LaTeX Font Info: Checking defaults for OML/cmm/m/it on input line 115. +LaTeX Font Info: ... okay on input line 115. +LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 115. +LaTeX Font Info: ... okay on input line 115. +LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 115. +LaTeX Font Info: ... okay on input line 115. +LaTeX Font Info: Checking defaults for OMS/cmsy/m/n on input line 115. +LaTeX Font Info: ... okay on input line 115. +LaTeX Font Info: Checking defaults for TU/lmr/m/n on input line 115. +LaTeX Font Info: ... okay on input line 115. +LaTeX Font Info: Checking defaults for OMX/cmex/m/n on input line 115. +LaTeX Font Info: ... okay on input line 115. +LaTeX Font Info: Checking defaults for U/cmr/m/n on input line 115. 
+LaTeX Font Info: ... okay on input line 115. +LaTeX Font Info: Checking defaults for TS1/cmr/m/n on input line 115. +LaTeX Font Info: ... okay on input line 115. +LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 115. +LaTeX Font Info: ... okay on input line 115. +LaTeX Font Info: Checking defaults for PU/pdf/m/n on input line 115. +LaTeX Font Info: ... okay on input line 115. LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `normal' -(Font) OT1/lmss/m/n --> TU/lmss/m/n on input line 116. +(Font) OT1/lmss/m/n --> TU/lmss/m/n on input line 115. LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold' -(Font) OT1/lmss/bx/n --> TU/lmss/bx/n on input line 116. +(Font) OT1/lmss/bx/n --> TU/lmss/bx/n on input line 115. LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `normal' -(Font) OT1/lmtt/m/n --> TU/lmtt/m/n on input line 116. +(Font) OT1/lmtt/m/n --> TU/lmtt/m/n on input line 115. LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold' -(Font) OT1/lmtt/m/n --> TU/lmtt/bx/n on input line 116. +(Font) OT1/lmtt/m/n --> TU/lmtt/bx/n on input line 115. Package fontspec Info: latinmodern-math scale = 1.037739856970899. @@ -1753,7 +1753,7 @@ imens: (fontspec) \__um_luatex_copy_fontdimens: LaTeX Font Info: Font shape `TU/latinmodern-math.otf(0)/m/n' will be -(Font) scaled to size 12.45282pt on input line 116. +(Font) scaled to size 12.45282pt on input line 115. Package fontspec Info: latinmodern-math scale = 1.037739856970899. @@ -1797,17 +1797,17 @@ h.otf]:mode=base;script=math;language=DFLT;+ssty=0;"<-7.2>s*[1.037739856970899]" (fontspec) \__um_luatex_copy_fontdimens: LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be -(Font) scaled to size 12.45282pt on input line 116. +(Font) scaled to size 12.45282pt on input line 115. LaTeX Font Info: Encoding `OT1' has changed to `TU' for symbol font -(Font) `operators' in the math version `normal' on input line 116. +(Font) `operators' in the math version `normal' on input line 115. LaTeX Font Info: Overwriting symbol font `operators' in version `normal' (Font) OT1/lmr/m/n --> TU/latinmodern-math.otf(1)/m/n on input -line 116. +line 115. LaTeX Font Info: Encoding `OT1' has changed to `TU' for symbol font -(Font) `operators' in the math version `bold' on input line 116. +(Font) `operators' in the math version `bold' on input line 115. LaTeX Font Info: Overwriting symbol font `operators' in version `bold' (Font) OT1/lmr/bx/n --> TU/latinmodern-math.otf(1)/bx/n on inpu -t line 116. +t line 115. Package fontspec Info: latinmodern-math scale = 1.037739856970899. @@ -1897,15 +1897,15 @@ h.otf]:mode=base;script=math;language=DFLT;+ssty=0;"<-7.2>s*[1.037843630956596]" (fontspec) \fontdimen 21\font =0pt\relax LaTeX Font Info: Encoding `OMS' has changed to `TU' for symbol font -(Font) `symbols' in the math version `normal' on input line 116. +(Font) `symbols' in the math version `normal' on input line 115. LaTeX Font Info: Overwriting symbol font `symbols' in version `normal' (Font) OMS/lmsy/m/n --> TU/latinmodern-math.otf(2)/m/n on input - line 116. + line 115. LaTeX Font Info: Encoding `OMS' has changed to `TU' for symbol font -(Font) `symbols' in the math version `bold' on input line 116. +(Font) `symbols' in the math version `bold' on input line 115. LaTeX Font Info: Overwriting symbol font `symbols' in version `bold' (Font) OMS/lmsy/b/n --> TU/latinmodern-math.otf(2)/bx/n on inpu -t line 116. +t line 115. Package fontspec Info: latinmodern-math scale = 1.037739856970899. 
@@ -1978,48 +1978,48 @@ h.otf]:mode=base;script=math;language=DFLT;+ssty=0;"<-7.2>s*[1.037636082985202]" LaTeX Font Info: Encoding `OMX' has changed to `TU' for symbol font (Font) `largesymbols' in the math version `normal' on input line 11 -6. +5. LaTeX Font Info: Overwriting symbol font `largesymbols' in version `normal' (Font) OMX/lmex/m/n --> TU/latinmodern-math.otf(3)/m/n on input - line 116. + line 115. LaTeX Font Info: Encoding `OMX' has changed to `TU' for symbol font -(Font) `largesymbols' in the math version `bold' on input line 116. +(Font) `largesymbols' in the math version `bold' on input line 115. LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold' (Font) OMX/lmex/m/n --> TU/latinmodern-math.otf(3)/bx/n on inpu -t line 116. +t line 115. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be -(Font) scaled to size 8.71696pt on input line 116. +(Font) scaled to size 8.71696pt on input line 115. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be -(Font) scaled to size 6.22641pt on input line 116. -LaTeX Font Info: Try loading font information for OML+lmm on input line 116. +(Font) scaled to size 6.22641pt on input line 115. +LaTeX Font Info: Try loading font information for OML+lmm on input line 115. (/usr/share/texmf-dist/tex/latex/lm/omllmm.fd File: omllmm.fd 2009/10/30 v1.6 Font defs for Latin Modern ) LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be -(Font) scaled to size 12.4541pt on input line 116. +(Font) scaled to size 12.4541pt on input line 115. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be -(Font) scaled to size 8.71785pt on input line 116. +(Font) scaled to size 8.71785pt on input line 115. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be -(Font) scaled to size 6.22705pt on input line 116. +(Font) scaled to size 6.22705pt on input line 115. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be -(Font) scaled to size 12.45172pt on input line 116. +(Font) scaled to size 12.45172pt on input line 115. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be -(Font) scaled to size 8.71619pt on input line 116. +(Font) scaled to size 8.71619pt on input line 115. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be -(Font) scaled to size 6.22586pt on input line 116. -LaTeX Font Info: Try loading font information for U+msa on input line 116. +(Font) scaled to size 6.22586pt on input line 115. +LaTeX Font Info: Try loading font information for U+msa on input line 115. (/usr/share/texmf-dist/tex/latex/amsfonts/umsa.fd File: umsa.fd 2013/01/14 v3.01 AMS symbols A ) -LaTeX Font Info: Try loading font information for U+msb on input line 116. +LaTeX Font Info: Try loading font information for U+msb on input line 115. (/usr/share/texmf-dist/tex/latex/amsfonts/umsb.fd File: umsb.fd 2013/01/14 v3.01 AMS symbols B ) -LaTeX Info: Redefining \microtypecontext on input line 116. +LaTeX Info: Redefining \microtypecontext on input line 115. Package microtype Info: Generating PDF output. Package microtype Info: Character protrusion enabled (level 2). Package microtype Info: Using protrusion set `basicmath'. @@ -2065,7 +2065,7 @@ File: epstopdf-sys.cfg 2010/07/13 v1.3 Configuration of (r)epstopdf for TeX Live )) \AtBeginShipoutBox=\box80 -Package hyperref Info: Link coloring OFF on input line 116. +Package hyperref Info: Link coloring OFF on input line 115. 
(/usr/share/texmf-dist/tex/latex/hyperref/nameref.sty Package: nameref 2016/05/21 v2.44 Cross-referencing by name of section @@ -2075,9 +2075,9 @@ Package: gettitlestring 2016/05/16 v1.5 Cleanup title references (HO) ) \c@section@level=\count481 ) -LaTeX Info: Redefining \ref on input line 116. -LaTeX Info: Redefining \pageref on input line 116. -LaTeX Info: Redefining \nameref on input line 116. +LaTeX Info: Redefining \ref on input line 115. +LaTeX Info: Redefining \pageref on input line 115. +LaTeX Info: Redefining \nameref on input line 115. *geometry* driver: auto-detecting *geometry* detected driver: luatex @@ -2124,7 +2124,7 @@ ABD: EveryShipout initializing macros Package tikz-feynman Warning: Consider loading TikZ-Feynman with \usepackage[com pat=1.1.0]{tikz-feynman} so that you can be warned if TikZ-Feynman changes. on i -nput line 116. +nput line 115. Package caption Info: Begin \AtBeginDocument code. Package caption Info: subfig package v1.3 is loaded. @@ -2139,34 +2139,34 @@ Package biblatex Info: Automatic encoding selection. Package biblatex Info: Trying to load bibliographic data... Package biblatex Info: ... file 'thesis.bbl' found. (./thesis.bbl) -Package biblatex Info: Reference section=0 on input line 116. -Package biblatex Info: Reference segment=0 on input line 116. +Package biblatex Info: Reference section=0 on input line 115. +Package biblatex Info: Reference segment=0 on input line 115. Package microtype Info: Loading generic protrusion settings for font family (microtype) `lmss' (encoding: TU). (microtype) For optimal results, create family-specific settings. (microtype) See the microtype manual for details. LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/m/n' will be -(Font) scaled to size 20.74pt on input line 118. +(Font) scaled to size 20.74pt on input line 117. LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/m/n' will be -(Font) scaled to size 14.4pt on input line 118. +(Font) scaled to size 14.4pt on input line 117. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be -(Font) scaled to size 14.94337pt on input line 118. +(Font) scaled to size 14.94337pt on input line 117. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be -(Font) scaled to size 10.37735pt on input line 118. +(Font) scaled to size 10.37735pt on input line 117. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be -(Font) scaled to size 7.26414pt on input line 118. +(Font) scaled to size 7.26414pt on input line 117. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be -(Font) scaled to size 14.9449pt on input line 118. +(Font) scaled to size 14.9449pt on input line 117. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be -(Font) scaled to size 10.37842pt on input line 118. +(Font) scaled to size 10.37842pt on input line 117. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be -(Font) scaled to size 7.2649pt on input line 118. +(Font) scaled to size 7.2649pt on input line 117. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be -(Font) scaled to size 14.94205pt on input line 118. +(Font) scaled to size 14.94205pt on input line 117. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be -(Font) scaled to size 10.37643pt on input line 118. +(Font) scaled to size 10.37643pt on input line 117. LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be -(Font) scaled to size 7.2635pt on input line 118. +(Font) scaled to size 7.2635pt on input line 117. 
(/usr/share/texmf-dist/tex/latex/microtype/mt-msa.cfg File: mt-msa.cfg 2006/02/04 v1.1 microtype config. file: AMS symbols (a) (RS) ) @@ -2174,13 +2174,13 @@ File: mt-msa.cfg 2006/02/04 v1.1 microtype config. file: AMS symbols (a) (RS) File: mt-msb.cfg 2005/06/01 v1.0 microtype config. file: AMS symbols (b) (RS) ) LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/m/n' will be -(Font) scaled to size 10.95pt on input line 118. +(Font) scaled to size 10.95pt on input line 117. LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/bx/n' will be -(Font) scaled to size 10.95pt on input line 118. +(Font) scaled to size 10.95pt on input line 117. LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/m/n' will be -(Font) scaled to size 17.28pt on input line 127. +(Font) scaled to size 17.28pt on input line 126. LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/bx/n' will be -(Font) scaled to size 17.28pt on input line 127. +(Font) scaled to size 17.28pt on input line 126. (./thesis.toc LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/bx/n' will be (Font) scaled to size 12.0pt on input line 3. @@ -2194,47 +2194,47 @@ LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/bx/n' will be warning (pdf backend): ignoring duplicate destination with the name 'page.' [2] LaTeX Font Info: Font shape `TU/TimesNewRoman(1)/m/n' will be -(Font) scaled to size 12.0pt on input line 153. +(Font) scaled to size 12.0pt on input line 157. LaTeX Font Info: Font shape `TU/TimesNewRoman(1)/m/n' will be -(Font) scaled to size 8.4pt on input line 153. +(Font) scaled to size 8.4pt on input line 157. LaTeX Font Info: Font shape `TU/TimesNewRoman(1)/m/n' will be -(Font) scaled to size 6.0pt on input line 153. +(Font) scaled to size 6.0pt on input line 157. LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/m/n' will be -(Font) scaled to size 8.4pt on input line 153. +(Font) scaled to size 8.4pt on input line 157. LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/m/n' will be -(Font) scaled to size 4.2pt on input line 153. +(Font) scaled to size 4.2pt on input line 157. [1] LaTeX Font Info: Font shape `TU/TimesNewRoman(0)/bx/n' will be -(Font) scaled to size 14.4pt on input line 182. -<./figures/sm_wikipedia.pdf, id=101, 1014.37367pt x 927.15916pt> +(Font) scaled to size 14.4pt on input line 190. +<./figures/sm_wikipedia.pdf, id=104, 1014.37367pt x 927.15916pt> File: ./figures/sm_wikipedia.pdf Graphic file (type pdf) -Package luatex.def Info: ./figures/sm_wikipedia.pdf used on input line 209. +Package luatex.def Info: ./figures/sm_wikipedia.pdf used on input line 223. (luatex.def) Requested size: 227.62057pt x 208.05006pt. - [2<./figures/sm_wikipedia.pdf>] [3] [4] [5] + [2<./figures/sm_wikipedia.pdf>] [3] [4] [5] [6] -LaTeX Warning: Citation 'website' on page 6 undefined on input line 383. +LaTeX Warning: Citation 'website' on page 7 undefined on input line 451. -[6] -<./figures/cms_coordinates.png, id=334, 2087.8pt x 1716.4125pt> +[7] +<./figures/cms_coordinates.png, id=347, 2087.8pt x 1716.4125pt> File: ./figures/cms_coordinates.png Graphic file (type png) -Package luatex.def Info: ./figures/cms_coordinates.png used on input line 454. +Package luatex.def Info: ./figures/cms_coordinates.png used on input line 526. (luatex.def) Requested size: 273.14381pt x 224.55573pt. 
- [7<./figures/cms_coordinates.png>] [8] -<./figures/antikt-comparision.png, id=355, 1134.2375pt x 827.09pt> + [8<./figures/cms_coordinates.png>] [9] +<./figures/antikt-comparision.png, id=369, 1134.2375pt x 827.09pt> File: ./figures/antikt-comparision.png Graphic file (type png) -Package luatex.def Info: ./figures/antikt-comparision.png used on input line 57 -5. +Package luatex.def Info: ./figures/antikt-comparision.png used on input line 65 +0. (luatex.def) Requested size: 455.2446pt x 331.96596pt. - [9] [10<./figures/antikt-comparision.png>] [11] -<./figures/cb_fit.pdf, id=384, 569.12625pt x 315.1775pt> + [10] [11<./figures/antikt-comparision.png>] [12] +<./figures/cb_fit.pdf, id=401, 569.12625pt x 315.1775pt> File: ./figures/cb_fit.pdf Graphic file (type pdf) -Package luatex.def Info: ./figures/cb_fit.pdf used on input line 707. +Package luatex.def Info: ./figures/cb_fit.pdf used on input line 761. (luatex.def) Requested size: 455.26688pt x 252.1231pt. - [12] + [13<./figures/cb_fit.pdf>] Package epstopdf Info: Source file: <./figures/2016/v1_Cleaner_N_jets_stack.eps> (epstopdf) date: 2019-10-14 10:21:50 @@ -2245,15 +2245,15 @@ converted-to.pdf> (epstopdf) size: 8841 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 745. +(epstopdf) \includegraphics on input line 804. Package epstopdf Info: Output file is already uptodate. -<./figures/2016/v1_Cleaner_N_jets_stack-eps-converted-to.pdf, id=394, 569.12625p +<./figures/2016/v1_Cleaner_N_jets_stack-eps-converted-to.pdf, id=457, 569.12625p t x 534.99875pt> File: ./figures/2016/v1_Cleaner_N_jets_stack-eps-converted-to.pdf Graphic file ( type pdf) Package luatex.def Info: ./figures/2016/v1_Cleaner_N_jets_stack-eps-converted-to -.pdf used on input line 745. +.pdf used on input line 804. (luatex.def) Requested size: 227.6204pt x 213.9712pt. Package epstopdf Info: Source file: <./figures/2016/v1_Njet_N_jets_stack.eps> (epstopdf) date: 2019-10-14 10:21:51 @@ -2264,15 +2264,15 @@ verted-to.pdf> (epstopdf) size: 8929 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 748. +(epstopdf) \includegraphics on input line 807. Package epstopdf Info: Output file is already uptodate. -<./figures/2016/v1_Njet_N_jets_stack-eps-converted-to.pdf, id=395, 569.12625pt x +<./figures/2016/v1_Njet_N_jets_stack-eps-converted-to.pdf, id=458, 569.12625pt x 534.99875pt> File: ./figures/2016/v1_Njet_N_jets_stack-eps-converted-to.pdf Graphic file (typ e pdf) Package luatex.def Info: ./figures/2016/v1_Njet_N_jets_stack-eps-converted-to.pd -f used on input line 748. +f used on input line 807. (luatex.def) Requested size: 227.6204pt x 213.9712pt. Package epstopdf Info: Source file: <./figures/combined/v1_Cleaner_N_jets_stack. eps> @@ -2285,15 +2285,15 @@ eps-converted-to.pdf> (epstopdf) Command: -(epstopdf) \includegraphics on input line 751. +(epstopdf) \includegraphics on input line 810. Package epstopdf Info: Output file is already uptodate. -<./figures/combined/v1_Cleaner_N_jets_stack-eps-converted-to.pdf, id=396, 569.12 +<./figures/combined/v1_Cleaner_N_jets_stack-eps-converted-to.pdf, id=459, 569.12 625pt x 534.99875pt> File: ./figures/combined/v1_Cleaner_N_jets_stack-eps-converted-to.pdf Graphic fi le (type pdf) Package luatex.def Info: ./figures/combined/v1_Cleaner_N_jets_stack-eps-converte -d-to.pdf used on input line 751. +d-to.pdf used on input line 810. (luatex.def) Requested size: 227.6204pt x 213.9712pt. 
Package epstopdf Info: Source file: <./figures/combined/v1_Njet_N_jets_stack.eps > @@ -2305,30 +2305,26 @@ Package epstopdf Info: Source file: <./figures/combined/v1_Njet_N_jets_stack.eps (epstopdf) size: 93077 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 754. +(epstopdf) \includegraphics on input line 813. Package epstopdf Info: Output file is already uptodate. -<./figures/combined/v1_Njet_N_jets_stack-eps-converted-to.pdf, id=397, 569.12625 +<./figures/combined/v1_Njet_N_jets_stack-eps-converted-to.pdf, id=460, 569.12625 pt x 534.99875pt> File: ./figures/combined/v1_Njet_N_jets_stack-eps-converted-to.pdf Graphic file (type pdf) Package luatex.def Info: ./figures/combined/v1_Njet_N_jets_stack-eps-converted-t -o.pdf used on input line 754. +o.pdf used on input line 813. (luatex.def) Requested size: 227.6204pt x 213.9712pt. -Overfull \hbox (2.0pt too wide) in paragraph at lines 744--758 +Overfull \hbox (2.0pt too wide) in paragraph at lines 803--817 []$[]$ $[]$ [] -Overfull \hbox (2.0pt too wide) in paragraph at lines 744--758 +Overfull \hbox (2.0pt too wide) in paragraph at lines 803--817 []$ $[]$ [] -[13<./figures/cb_fit.pdf>] [14<./figures/2016/v1_Cleaner_N_jets_stack-eps-conver -ted-to.pdf><./figures/2016/v1_Njet_N_jets_stack-eps-converted-to.pdf><./figures/ -combined/v1_Cleaner_N_jets_stack-eps-converted-to.pdf><./figures/combined/v1_Nje -t_N_jets_stack-eps-converted-to.pdf>] Package epstopdf Info: Source file: <./figures/2016/v1_Njet_deta_stack.eps> (epstopdf) date: 2019-10-14 10:21:51 (epstopdf) size: 34731 bytes @@ -2338,15 +2334,15 @@ rted-to.pdf> (epstopdf) size: 14179 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 774. +(epstopdf) \includegraphics on input line 837. Package epstopdf Info: Output file is already uptodate. -<./figures/2016/v1_Njet_deta_stack-eps-converted-to.pdf, id=519, 569.12625pt x 5 +<./figures/2016/v1_Njet_deta_stack-eps-converted-to.pdf, id=462, 569.12625pt x 5 34.99875pt> File: ./figures/2016/v1_Njet_deta_stack-eps-converted-to.pdf Graphic file (type pdf) Package luatex.def Info: ./figures/2016/v1_Njet_deta_stack-eps-converted-to.pdf - used on input line 774. + used on input line 837. (luatex.def) Requested size: 227.6204pt x 213.9712pt. Package epstopdf Info: Source file: <./figures/2016/v1_Eta_deta_stack.eps> (epstopdf) date: 2019-10-14 10:21:52 @@ -2357,15 +2353,15 @@ ted-to.pdf> (epstopdf) size: 11019 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 777. +(epstopdf) \includegraphics on input line 840. Package epstopdf Info: Output file is already uptodate. -<./figures/2016/v1_Eta_deta_stack-eps-converted-to.pdf, id=520, 569.12625pt x 53 +<./figures/2016/v1_Eta_deta_stack-eps-converted-to.pdf, id=463, 569.12625pt x 53 4.99875pt> File: ./figures/2016/v1_Eta_deta_stack-eps-converted-to.pdf Graphic file (type p df) Package luatex.def Info: ./figures/2016/v1_Eta_deta_stack-eps-converted-to.pdf -used on input line 777. +used on input line 840. (luatex.def) Requested size: 227.6204pt x 213.9712pt. Package epstopdf Info: Source file: <./figures/combined/v1_Njet_deta_stack.eps> (epstopdf) date: 2019-10-14 10:24:56 @@ -2376,15 +2372,15 @@ onverted-to.pdf> (epstopdf) size: 96978 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 780. +(epstopdf) \includegraphics on input line 843. Package epstopdf Info: Output file is already uptodate. 
-<./figures/combined/v1_Njet_deta_stack-eps-converted-to.pdf, id=521, 569.12625pt +<./figures/combined/v1_Njet_deta_stack-eps-converted-to.pdf, id=464, 569.12625pt x 534.99875pt> File: ./figures/combined/v1_Njet_deta_stack-eps-converted-to.pdf Graphic file (t ype pdf) Package luatex.def Info: ./figures/combined/v1_Njet_deta_stack-eps-converted-to. -pdf used on input line 780. +pdf used on input line 843. (luatex.def) Requested size: 227.6204pt x 213.9712pt. Package epstopdf Info: Source file: <./figures/combined/v1_Eta_deta_stack.eps> (epstopdf) date: 2019-10-14 10:25:01 @@ -2395,23 +2391,23 @@ nverted-to.pdf> (epstopdf) size: 94537 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 783. +(epstopdf) \includegraphics on input line 846. Package epstopdf Info: Output file is already uptodate. -<./figures/combined/v1_Eta_deta_stack-eps-converted-to.pdf, id=522, 569.12625pt +<./figures/combined/v1_Eta_deta_stack-eps-converted-to.pdf, id=465, 569.12625pt x 534.99875pt> File: ./figures/combined/v1_Eta_deta_stack-eps-converted-to.pdf Graphic file (ty pe pdf) Package luatex.def Info: ./figures/combined/v1_Eta_deta_stack-eps-converted-to.p -df used on input line 783. +df used on input line 846. (luatex.def) Requested size: 227.6204pt x 213.9712pt. -Overfull \hbox (2.0pt too wide) in paragraph at lines 773--787 +Overfull \hbox (2.0pt too wide) in paragraph at lines 836--850 []$[]$ $[]$ [] -Overfull \hbox (2.0pt too wide) in paragraph at lines 773--787 +Overfull \hbox (2.0pt too wide) in paragraph at lines 836--850 []$ $[]$ [] @@ -2424,15 +2420,15 @@ verted-to.pdf> (epstopdf) size: 10564 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 802. +(epstopdf) \includegraphics on input line 865. Package epstopdf Info: Output file is already uptodate. -<./figures/2016/v1_Eta_invMass_stack-eps-converted-to.pdf, id=524, 569.12625pt x +<./figures/2016/v1_Eta_invMass_stack-eps-converted-to.pdf, id=467, 569.12625pt x 534.99875pt> File: ./figures/2016/v1_Eta_invMass_stack-eps-converted-to.pdf Graphic file (typ e pdf) Package luatex.def Info: ./figures/2016/v1_Eta_invMass_stack-eps-converted-to.pd -f used on input line 802. +f used on input line 865. (luatex.def) Requested size: 227.6204pt x 213.9712pt. Package epstopdf Info: Source file: <./figures/2016/v1_invmass_invMass_stack.eps > @@ -2444,15 +2440,15 @@ Package epstopdf Info: Source file: <./figures/2016/v1_invmass_invMass_stack.eps (epstopdf) size: 10507 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 805. +(epstopdf) \includegraphics on input line 868. Package epstopdf Info: Output file is already uptodate. -<./figures/2016/v1_invmass_invMass_stack-eps-converted-to.pdf, id=525, 569.12625 +<./figures/2016/v1_invmass_invMass_stack-eps-converted-to.pdf, id=468, 569.12625 pt x 534.99875pt> File: ./figures/2016/v1_invmass_invMass_stack-eps-converted-to.pdf Graphic file (type pdf) Package luatex.def Info: ./figures/2016/v1_invmass_invMass_stack-eps-converted-t -o.pdf used on input line 805. +o.pdf used on input line 868. (luatex.def) Requested size: 227.6204pt x 213.9712pt. Package epstopdf Info: Source file: <./figures/combined/v1_Eta_invMass_stack.eps > @@ -2464,15 +2460,15 @@ Package epstopdf Info: Source file: <./figures/combined/v1_Eta_invMass_stack.eps (epstopdf) size: 94622 bytes (epstopdf) Command: -(epstopdf) \includegraphics on input line 808. +(epstopdf) \includegraphics on input line 871. Package epstopdf Info: Output file is already uptodate. 
diff --git a/thesis.md b/thesis.md
index 766f8ca..418b8cb 100644
--- a/thesis.md
+++ b/thesis.md
@@ -8,7 +8,6 @@ header-includes: |
   \usepackage{tikz-feynman}
   \usepackage{csquotes}
   \pagenumbering{gobble}
-  \setlength{\parindent}{1.0em}
   \setlength{\parskip}{0.5em}
   \bibliographystyle{lucas_unsrt}
 abstract: |
@@ -41,49 +40,59 @@ The Standard Model is a very successful theory in describing most of the effects
 lot of shortcomings that show that it isn't yet a full "theory of everything". To solve these shortcomings, lots of
 theories beyond the standard model exist that try to explain some of them.
 
-One category of such theories is based on a composite quark model. They predict that quarks consist of particles unknown
-to us so far or can bind to other particles using unknown forces. This could explain some symmetries between particles
+One category of such theories is based on a composite quark model. The Standard Model currently considers quarks to be
+elementary particles. Composite quark models, on the other hand, predict that quarks consist of particles unknown
+to us so far or can bind to other particles using unknown forces. This could explain the symmetries between particles
 and reduce the number of constants needed to explain the properties of the known particles. One common prediction of
 those theories are excited quark states. Those are quark states of higher energy that can decay to an unexcited quark
-under the emission of a boson. These decays are the topic of this thesis.
+under the emission of a boson. This thesis searches for their decay to a quark and a W/Z boson, with the W/Z boson then
+decaying in the hadronic channel to two more quarks. The endstate of this decay contains only quarks, making quantum
+chromodynamics effects the main background.
 
-In previous research, a lower limit for the mass of an excited quark has already been set using data from the 2016 run
-of the Large Hadron Collider with an integrated luminosity of $\SI{35.92}{\per\femto\barn}$. Since then, a lot more data
-has been collected, totalling to $\SI{137.19}{\per\femto\barn}$. This thesis uses this new data as well as a new
-technique to identify decays of highly boosted particles based on a deep neural network to further improve this limit
-and therefore exclude the excited quark particle to even higher masses. It will also compare this new tagging technique
-to an older tagger based on jet substructure studies used in the previous research.
+In previous research [@PREV_RESEARCH], a lower limit for the mass of an excited quark has already been set using data
+from the 2016 run of the Large Hadron Collider with an integrated luminosity of $\SI{35.92}{\per\femto\barn}$. Since
+then, a lot more data has been collected, totalling $\SI{137.19}{\per\femto\barn}$ usable for this analysis. This
+thesis uses this new data as well as a new technique, based on a deep neural network, to identify decays of highly
+boosted particles. By using more data and new tagging techniques, it aims to either confirm the existence of the q\*
+particle or push the previously set lower limits on its mass, 5 TeV for the decay to qW and 4.7 TeV for the decay to
+qZ, to even higher values. It will also directly compare the performance of this new tagging technique to an older
+tagger based on jet substructure studies used in the previous research.
-First, a theoretical background will be presented explaining in short the Standard Model, its shortcomings and the
-theory of excited quarks. Then the Large Hadron Collider and the Compact Muon Solenoid, the detector that collected the
-data for this analysis, will be described. After that, the main analysis part follows, describing how the data was used
-to extract limits on the mass of the excited quark particle. At the very end, the results are presented and compared to
-previous research.
+In chapter 2, the theoretical background is presented, briefly explaining the Standard Model, its shortcomings and
+the theory of excited quarks. Then, in chapter 3, the Large Hadron Collider and the Compact Muon Solenoid, the detector
+that collected the data for this analysis, are described. After that, in chapters 4-7, the main analysis part
+follows, describing how the data was used to extract limits on the mass of the excited quark particle. At the very end,
+in chapter 8, the results are presented and compared to previous research.
 
 \newpage
 
-# Theoretical background
+# Theoretical motivation
 
 This chapter presents a short summary of the theoretical background relevant to this thesis. It first gives an
 introduction to the standard model itself and some of the issues it raises. It then goes on to explain the background
 processes of quantum chromodynamics and the theory of q*, which will be the main topic of this thesis.
 
-## Standard model
+## Standard model {#sec:sm}
 
-The Standard Model of physics proofed very successful in describing three of the four fundamental interactions currently
-known: the electromagnetic, weak and strong interaction. The fourth, gravity, could not yet be successfully included in
-this theory.
+The Standard Model of physics proved to be very successful in describing three of the four fundamental interactions
+currently known: the electromagnetic, weak and strong interaction. The fourth, gravity, could not yet be successfully
+included in this theory.
 
 The Standard Model divides all particles into spin-$\frac{n}{2}$ fermions and spin-n bosons,
 where n could be any integer but so far is only known to be one for fermions and either one (gauge bosons) or zero
 (scalar bosons) for
-bosons. The fermions are further divided into quarks and leptons. Each of those exists in six so called flavours.
-Furthermore, quarks and leptons can also be divided into three generations, each of which contains two particles.
-In the lepton category, each generation has one charged lepton and one neutrino, that has no charge. Also, the mass of
-the neutrinos is not yet known, only an upper bound has been established. A full list of particles known to the
-standard model can be found in [@fig:sm]. Furthermore, all fermions have an associated anti particle with reversed
-charge. Multiple quarks can form bound states called hadrons (e.g. proton and neutron).
+bosons. Fermions are further classified into quarks and leptons.
+Quarks and leptons can also be categorized into three generations, each of which contains two particles, also called
+flavours. For leptons, the three generations each consist of a charged lepton and its corresponding neutrino: the
+electron, the muon and the tau. The three quark generations consist of the up and down, the charm and strange, and
+the top and bottom quarks. Overall, there are six quark and six lepton flavours. A full list of particles known to the
+standard model can be found in [@fig:sm]. Furthermore, all fermions have an associated antiparticle with reversed
+charge.
+
+The matter around us is built from so-called hadrons, bound states of quarks, for example protons and
+neutrons. Long-lived hadrons consist of up and down quarks, as the heavier quarks decay to those over time.
 
-![Elementary particles of the Standard Model and their mass charge and spin.
+![
+Elementary particles of the Standard Model and their mass, charge and spin.
 ](./figures/sm_wikipedia.pdf){width=50% #fig:sm}
 
 The gauge bosons, namely the photon, $W^\pm$ bosons, $Z^0$ boson, and gluon, are mediators of the different
@@ -116,15 +125,94 @@ The probability of a quark changing its flavour from $i$ to $j$ is given by the
 matrix element $V_{ij}$. It is easy to see, that the change of flavour in the same generation is way more likely than
 any other flavour change.
 
-The quantum chromodynamics (QCD) describe the strong interaction of particles. It applies to all
-particles carrying colour (e.g. quarks). The force is mediated by the gluon. This boson carries colour as well,
-although it doesn't carry only one colour but rather a combination of a colour and an anticolour, and can therefore
-interact with itself and exists in eight different variant. As a result of this, processes, where a gluon decays into
-two gluons are possible. Furthermore the strong force, binding to colour carrying particles, increases with their
-distance r making it at a certain point more energetically efficient to form a new quark - antiquark pair than
-separating the two particles even further. This effect is known as colour confinement. Due to this effect, colour
-carrying particles can't be observed directly, but rather form so called jets that cause hadronic showers in the
-detector. An effect called Hadronisation.
+Due to their high masses of 80.39 GeV and 91.19 GeV, the $W^\pm$ and $Z^0$ bosons themselves decay very quickly,
+either in the leptonic or the hadronic decay channel. In the leptonic channel, the $W^\pm$ decays to a charged lepton
+and the corresponding (anti)neutrino; in the hadronic channel it decays to a quark and an antiquark of a different
+flavour. Because the $Z^0$ boson has no charge, it always decays to a fermion and its antiparticle: in the leptonic
+channel this might be, for example, an electron-positron pair, in the hadronic channel an up and anti-up quark pair.
+This thesis examines the hadronic decay channel, where both vector bosons essentially decay to two quarks.
+
+Quantum chromodynamics (QCD) describes the strong interaction of particles. It applies to all
+particles carrying colour (e.g. quarks). The force is mediated by gluons. These bosons carry colour as well,
+although they don't carry only one colour but rather a combination of a colour and an anticolour, and can therefore
+interact with themselves and exist in eight different variants. As a result, processes where a gluon decays
+into two gluons are possible. Furthermore, the strength of the strong force binding colour-carrying particles
+increases with their distance, making it at a certain point more energetically favourable to form a new quark-antiquark
+pair than to separate the two particles even further. This effect is known as colour confinement. Due to this effect,
+colour-carrying particles can't be observed directly, but rather form so-called jets that cause hadronic showers in the
+detector. Those jets are cone-like structures made of hadrons and other particles. This effect is called hadronisation.
+
+### Shortcomings of the Standard Model
+
+While being very successful in describing the effects observed in particle colliders or in the particles reaching
+Earth from cosmological sources, the Standard Model still has several shortcomings.
+
+- **Gravity**: as already noted, the standard model doesn't include gravity as a force.
+- **Dark Matter**: observations of the rotational velocity of galaxies can't be explained by the known matter. Dark
+  matter currently is our best theory to explain those.
+- **Matter-antimatter asymmetry**: The amount of matter vastly outweighs the amount of antimatter in the observable
+  universe. This can't be explained by the standard model, which predicts a similar amount of matter and antimatter.
+- **Symmetries between particles**: Why do exactly three generations of fermions exist? Why is the charge of a quark
+  exactly one third of the charge of a lepton? How are the masses of the particles related? Those and more questions
+  cannot be answered by the standard model.
+- **Hierarchy problem**: The weak force is approximately $10^{24}$ times stronger than gravity and so far, there's no
+  satisfactory explanation as to why that is.
+
+## Excited quark states {#sec:qs}
+
+One category of theories that try to explain the symmetries between the particles of the standard model are the
+composite quark models. These state that quarks consist of particles unknown to us so far. This could explain the
+symmetries between the different fermions. A common prediction of those models are excited quark states
+(q\*, q\*\*, q\*\*\*...). Similar to atoms, which can be excited by the absorption of a photon and can then decay again
+under emission of a photon with an energy corresponding to the excited state, those excited quark states could decay
+under the emission of any boson. Quarks are smaller than $10^{-18}$ m, which corresponds to an energy scale of
+approximately 1 TeV. Therefore the excited quark states are expected to be in that region. That will cause the emitted
+boson to be highly boosted.
+
+\begin{figure}
+\centering
+\feynmandiagram [large, horizontal=qs to v] {
+  a -- qs -- b,
+  qs -- [fermion, edge label=\(q*\)] v,
+  q1 [particle=\(q\)] -- v -- w [particle=\(W\)],
+  q2 [particle=\(q\)] -- w -- q3 [particle=\(q\)],
+};
+\caption{Feynman diagram showing a possible decay of a q* particle to a W boson and a quark with the W boson also
+decaying to two quarks.} \label{fig:qsfeynman}
+\end{figure}
+This thesis will search data collected by the CMS in the years 2016, 2017 and 2018 for the single excited quark state
+q\*, which can decay to a quark and any boson. An example of a q\* decaying to a quark and a W boson can be seen in
+[@fig:qsfeynman]. As explained in [@sec:sm], the vector boson can then decay either in the hadronic or leptonic decay
+channel. This research investigates only the hadronic channel with two quarks in the endstate. Because the boson is
+highly boosted, these two quarks will be very close together and therefore appear to the detector as only one jet. This
+means that the decay of a q\* particle will have two jets in the endstate (assuming the W/Z boson decays to two quarks)
+and will therefore be hard to distinguish from the QCD background described in [@sec:qcdbg].
+
+The choice of only examining the decay of the q\* particle to the vector bosons is motivated by the branching ratios
+calculated for the decay [@QSTAR_THEORY]:
+
+
+: Branching ratios of the decaying q\* particle.
+
+| decay mode                | br. ratio [%] | decay mode                | br. ratio [%] |
+|---------------------------|---------------|---------------------------|---------------|
+| $U^* \rightarrow ug$      | 83.4          | $D^* \rightarrow dg$      | 83.4          |
+| $U^* \rightarrow dW$      | 10.9          | $D^* \rightarrow uW$      | 10.9          |
+| $U^* \rightarrow u\gamma$ | 2.2           | $D^* \rightarrow d\gamma$ | 0.5           |
+| $U^* \rightarrow uZ$      | 3.5           | $D^* \rightarrow dZ$      | 5.1           |
+
+The decay to the vector bosons has the second-highest branching ratio. The decay to a gluon and a quark is the dominant
+decay, but virtually impossible to distinguish from the QCD background described in the next section. This makes the
+decay to the vector bosons the obvious choice.
+
+To reconstruct the mass of the q\* particle from an event successfully recognized as the decay of such a particle, the
+dijet invariant mass has to be calculated. This is achieved by adding together the four-momenta of the two jets,
+vectors consisting of the energy and momentum of a particle. From the four-momentum, the mass is easily derived by
+solving $E=\sqrt{p^2 + m^2}$ for m.
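+
+To make this concrete, the following is a minimal sketch of the dijet invariant mass computation from two four-momenta
+in the $(E, p_x, p_y, p_z)$ convention (illustrative Python, not the actual analysis code):
+
+```python
+import math
+
+def dijet_invariant_mass(jet1, jet2):
+    """Invariant mass of a jet pair; jets are (E, px, py, pz) four-momenta in GeV."""
+    e, px, py, pz = (a + b for a, b in zip(jet1, jet2))
+    # solve E^2 = p^2 + m^2 for m; clamp against rounding errors below zero
+    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))
+```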
+
+This theory has already been investigated in [@PREV_RESEARCH], analysing data recorded by CMS in 2016 and excluding
+the q\* particle up to a mass of 5 TeV resp. 4.7 TeV for the decay to qW resp. qZ in the hadronic decay of the vector
+boson. This thesis aims to either exclude the particle up to higher masses or find a resonance showing its existence,
+using the considerably larger dataset that is available now.
 
 ### Quantum Chromodynamic background {#sec:qcdbg}
 
@@ -133,8 +221,8 @@ signal processes from QCD effects. Those can also produce two jets in the endsta
 They are also happening very often in a proton proton collision, as it is happening in the Large Hadron Collider. This
 is caused by the structure of the proton. It not only consists of three quarks, called valence quarks, but also of a lot
 of quark-antiquark pairs connected by gluons, called the sea quarks, that exist due to the self interaction of the
-gluons binding the three valence quarks. Therefore in a proton - proton collision, interactions of gluons and quarks are
-the main processes causing a very strong QCD background.
+gluons binding the three valence quarks. Therefore the QCD multijet background is the dominant background of the signal
+described in [@sec:qs].
 
 \begin{figure}
 \centering
@@ -151,56 +239,6 @@ the main processes causing a very strong QCD background.
 \caption{Two examples of QCD processes resulting in two jets.} \label{fig:qcdfeynman}
 \end{figure}
 
-### Shortcomings of the Standard Model
-
-While being very successful in describing mostly all of the effects we can observe in particle colliders so far, the
-Standard Model still has several shortcomings.
-
-- **Gravity**: as already noted, the standard model doesn't include gravity as a force.
-- **Dark Matter**: observations of the rotational velocity of galaxies can't be explained by the known matter. Dark
-  matter currently is our best theory to explain those.
-- **Matter-antimatter assymetry**: The amount of matter vastly outweights the amount of
-  antimatter in the observable universe. This can't be explained by the standard model, which predicts a similar amount
-  of matter and antimatter.
-- **Symmetries between particles**: Why do exactly three generations of fermions exist? Why is the charge of a quark
-  exactly one third of the charge of a lepton? How are the masses of the particles related? Those and more questions
-  cannot be answered by the standard model.
-- **Hierarchy problem**: The weak force is approximately $10^{24}$ times stronger than gravity and so far, there's no
-  satisfactory explanation as to why that is.
-
-## Excited quark states {#sec:qs}
-
-One category of theories that try to solve some of the shortcomings of the standard model are the composite quark
-models. Those state, that quarks consist of some particles unknown to us so far. This could explain the symmetries
-between the different fermions. A common prediction of those models are excited quark states (q\*, q\*\*, q\*\*\*...).
-Similar to atoms, that can be excited by the absorption of a photon and can then decay again under emission of a photon
-with an energy corresponding to the excited state, those excited quark states could decay under the emission of some
-boson. Quarks are smaller than $10^{-18}$ m, due to that, excited states have to be of very high energy. That will cause
-the emitted boson to be highly boosted.
-
-\begin{figure}
-\centering
-\feynmandiagram [large, horizontal=qs to v] {
-  a -- qs -- b,
-  qs -- [fermion, edge label=\(q*\)] v,
-  q1 [particle=\(q\)] -- v -- w [particle=\(W\)],
-  q2 [particle=\(q\)] -- w -- q3 [particle=\(q\)],
-};
-\caption{Feynman diagram showing a possible decay of a q* particle to a W boson and a quark with the W boson also
-decaying to two quarks.} \label{fig:qsfeynman}
-\end{figure}
-This thesis will search data collected by the CMS in the years 2016, 2017 and 2018 for the single excited quark state
-q\* which can decay to a quark and any boson. An example of a q\* decaying to a quark and a W boson can be seen in
-[@fig:qsfeynman]. The boson quickly further decays into for example two quarks. Because the boson is highly boosted,
-those will be very close together and therefore appear to the detector as only one jet. This means that the decay of a
-q\* particle will have two jets in the endstate (assuming the W/Z boson decays to two quarks) and will therefore be hard
-to distinguish from the QCD background described in [@sec:qcdbg].
-
-To reconstruct the mass of the q\* particle from an event successfully recognized to be the decay of such a particle,
-the dijet invariant mass, the mass of the two jets in the final state, can be calculated by adding their four momenta,
-vectors consisting of the energy and momentum of a particle, together. From the four momentum it's easy to derive the
-mass by solving $E=\sqrt{p^2 + m^2}$ for m.
-
 \newpage
 
 # Experimental Setup
 
@@ -210,9 +248,9 @@ Following on, the experimental setup used to gather the data analysed in this th
 
 ## Large Hadron Collider
 
 The Large Hadron Collider is the world's largest and most powerful particle accelerator [@website]. It has a perimeter
-of 27 km and can collide protons at a centre of mass energy of 13 TeV. It is home to several experiments, the biggest of
-those are ATLAS and the Compact Muon Solenoid (CMS). Both are general-purpose detectors to investigate the particles
-that form during particle collisions.
+of 27 km and can accelerate two beams of protons to an energy of 6.5 TeV each, resulting in collisions with a centre of
+mass energy of 13 TeV. It is home to several experiments, the biggest of which are ATLAS and the Compact Muon Solenoid
+(CMS). Both are general-purpose detectors to investigate the particles that form during particle collisions.
 
 Particle colliders are characterized by their luminosity L.
 It is a quantity to be able to calculate the number of events per second generated in a collision by
 $N_{event} = L\sigma_{event}$ with $\sigma_{event}$ being the cross
@@ -237,7 +275,7 @@ $L_{int} = \int L dt$.
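+As a rough illustration of the scale, using the combined integrated luminosity quoted in this thesis: a process with a
+cross section of $\SI{0.001}{\pico\barn} = \SI{1}{\femto\barn}$ would yield
+$$N_{event} = L_{int}\,\sigma_{event} = \SI{137.19}{\per\femto\barn} \cdot \SI{1}{\femto\barn} \approx 137$$
+expected events before any selection.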
 
 ## Compact Muon Solenoid
 
-The data used in this thesis was captured by the Compact Muon Solenoid (CMS). It is one of the biggest experiments at
+The data used in this thesis was recorded by the Compact Muon Solenoid (CMS). It is one of the four main experiments at
 the Large Hadron Collider. It can detect all elementary particles of the standard model except neutrinos. For that, it
 has an onion like setup. The particles produced in a collision first go through a tracking system. They then pass an
 electromegnetic as well as a hadronic calorimeter. This part is surrounded by a superconducting solenoid that generates
@@ -249,13 +287,15 @@ $\SI{137.19}{\per\femto\barn}$.
 
 ### Coordinate conventions
 
-Per convention, the z axis points along the beam axis, the y axis upwards and the x axis horizontal towards the LHC
-centre. Furthermore, the azimuthal angle $\phi$, which describes the angle in the x - y plane, the polar angle $\theta$,
-which describes the angle in the y - z plane and the pseudorapidity $\eta$, which is defined as $\eta =
--ln\left(tan\frac{\theta}{2}\right)$ are introduced. The coordinates are visualised in [@fig:cmscoords]. Furthermore,
-to describe a particles momentum, often the transverse momentum, $p_t$ is used. It is the component of the momentum
-transversal to the beam axis. It is a useful quantity, because the sum of all transverse momenta has to be zero.
-Missing transverse momentum implies particles that weren't detected such as neutrinos.
+Per convention, the z axis points along the beam axis in the direction of the magnetic field of the solenoid, the y
+axis upwards and the x axis horizontally towards the LHC centre. The azimuthal angle $\phi$, which describes the angle
+in the x - y plane, the polar angle $\theta$, which describes the angle in the y - z plane, and the pseudorapidity
+$\eta$, defined as $\eta = -\ln\left(\tan\frac{\theta}{2}\right)$, are also introduced. The coordinates are visualised
+in [@fig:cmscoords]. Furthermore, to describe a particle's momentum, often the transverse momentum $p_t$ is used. It is
+the component of the momentum transverse to the beam axis. Before the collision, the transverse momentum has to be
+zero; therefore, due to conservation of momentum, the sum of all transverse momenta after the collision has to be
+zero, too. If this is not the case for a detected event, it implies particles that weren't detected, such as
+neutrinos.
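+
+A short sketch of how these quantities follow from a measured momentum vector (illustrative Python only):
+
+```python
+import math
+
+def detector_coordinates(px, py, pz):
+    """pt, eta and phi of a momentum vector, with the z axis along the beam."""
+    pt = math.hypot(px, py)                 # momentum component transverse to the beam
+    theta = math.atan2(pt, pz)              # polar angle
+    eta = -math.log(math.tan(theta / 2.0))  # pseudorapidity
+    phi = math.atan2(py, px)                # azimuthal angle
+    return pt, eta, phi
+```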
 
 ![Coordinate conventions of the CMS illustrating the use of $\eta$ and
 $\phi$. The Z axis is in beam direction. Taken from https://inspirehep.net/record/1236817/plots
@@ -263,17 +303,18 @@
 
 ### The tracking system
 
-The tracking system is built of two parts, first a pixel detector and then silicon strip sensors. It is used to
-reconstruct the tracks of charged particles, measuring their charge sign, direction and momentum. It is as close to the
-collision as possible to be able to identify secondary vertices.
+The tracking system is built of two parts: closest to the collision is a pixel detector and around that are silicon
+strip sensors. They are used to reconstruct the tracks of charged particles, measuring their charge sign, direction and
+momentum. They are as close to the collision as possible to be able to identify secondary vertices.
 
 ### The electromagnetic calorimeter
 
-The electromagnetic calorimeter measures the energy of photons and electrons. It is made of tungstate crystal.
-When passed by particles, it produces light in proportion to the particle's energy. This light is measured by
-photodetectors that convert this scintillation light to an electrical signal. To measure a particles energy, it has to
-leave its whole energy in the ECAL, which is true for photons and electrons, but not for other particles such as
-hadrons and muons. They too leave some energy in the ECAL.
+The electromagnetic calorimeter measures the energy of photons and electrons. It is made of tungstate crystal and
+photodetectors. When passed by particles, the crystal produces light in proportion to the particle's energy. This light
+is measured by the photodetectors, which convert this scintillation light to an electrical signal. To measure a
+particle's energy, it has to deposit its whole energy in the ECAL, which is true for photons and electrons, but not for
+other particles such as hadrons and muons. Those only leave part of their energy in the ECAL and are not stopped by it.
 
 ### The hadronic calorimeter
 
@@ -322,9 +363,10 @@ algorithm. It arises from a generalization of several other clustering algorithm
 and SISCone clustering algorithms.
 
 The anti-$k_t$ clustering algorithm associates hard particles with their soft particles surrounding them within a radius
-R in the $\eta$ - $\phi$ plane forming cone like jets. If two jets overlap, the jets shape is changed according to its
-hardness. A softer particles jet will change its shape more than a harder particles. A visual comparison of four
-different clustering algorithms can be seen in [@fig:antiktcomparision]. For this analysis, a radius of 0.8 is used.
+R, with the distance defined as $\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}$ in the $\eta$ - $\phi$ plane,
+forming cone-like jets. If two jets overlap, each jet's shape is changed according to its hardness in terms of
+transverse momentum: a softer jet's shape changes more than a harder jet's. A visual comparison of four different
+clustering algorithms can be seen in [@fig:antiktcomparison]. For this analysis, a radius of 0.8 is used.
 
 Furthermore, to approximate the mass of a heavy particle that caused a jet, the softdropmass can be used. It is
 calculated by removing wide angle soft particles from the jet to counter the effects of contamination from initial state
@@ -332,53 +374,40 @@ radiation, underlying event and multiple hadron scattering. It therefore is more
 particle causing a jet than taking the mass of all constituent particles of the jet combined.
 
 ![
-Comparision of the $k_t$, Cambridge/Aachen, SISCone and anti-$k_t$ algorithms clustering a sample parton-level event
-with many random soft "ghosts". Taken from
-](./figures/antikt-comparision.png){#fig:antiktcomparision}
+Comparison of the $k_t$, Cambridge/Aachen, SISCone and anti-$k_t$ algorithms clustering a sample parton-level event
+with many random soft "ghosts". Taken from [@ANTIKT]
+](./figures/antikt-comparision.png){#fig:antiktcomparison}
+
+[@fig:antiktcomparison] clearly shows that the jets reconstructed using the anti-$k_t$ algorithm come closest to a
+cone-like shape.
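+
+To illustrate the clustering rule, below is a naive $O(n^3)$ sketch of anti-$k_t$ in Python (illustrative only;
+production analyses use dedicated, optimised implementations). Particles are given as massless (pt, eta, phi) tuples
+with non-zero pt:
+
+```python
+import math
+
+def anti_kt(particles, R=0.8):
+    """Naive anti-kt clustering; returns the final jets, hardest first."""
+    objs = [tuple(p) for p in particles]
+    jets = []
+    while objs:
+        best = (objs[0][0] ** -2, 0, None)          # beam distance d_iB = pt^-2
+        for i, (pti, etai, phii) in enumerate(objs):
+            if pti ** -2 < best[0]:
+                best = (pti ** -2, i, None)
+            for j in range(i + 1, len(objs)):
+                ptj, etaj, phij = objs[j]
+                dphi = (phii - phij + math.pi) % (2 * math.pi) - math.pi
+                dij = min(pti ** -2, ptj ** -2) * ((etai - etaj) ** 2 + dphi ** 2) / R ** 2
+                if dij < best[0]:
+                    best = (dij, i, j)
+        _, i, j = best
+        if j is None:                               # closest to the beam: final jet
+            jets.append(objs.pop(i))
+        else:                                       # merge i and j (four-momentum sum)
+            (pt1, eta1, phi1), (pt2, eta2, phi2) = objs[i], objs[j]
+            px = pt1 * math.cos(phi1) + pt2 * math.cos(phi2)
+            py = pt1 * math.sin(phi1) + pt2 * math.sin(phi2)
+            pz = pt1 * math.sinh(eta1) + pt2 * math.sinh(eta2)
+            pt = math.hypot(px, py)
+            objs = [o for k, o in enumerate(objs) if k not in (i, j)]
+            objs.append((pt, math.asinh(pz / pt), math.atan2(py, px)))
+    return sorted(jets, key=lambda jet: -jet[0])
+```
+
+The $p_t^{-2}$ weighting is what makes hard particles dominate the clustering and yields the cone-like jets seen in
+the comparison above.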
 \newpage
 
-# Method of analysis
+# Method of analysis {#sec:moa}
 
 This section gives an overview over how the data gathered by the LHC and CMS is going to be analysed to be able to
 either exclude the q\* particle to even higher masses than already done or maybe confirm its existence.
 
-As described in [@sec:qs], an excited quark q\* can decay to a quark and any boson. The branching ratios are calculated
-to be as follows [@QSTAR_THEORY]:
-
-
-: Branching ratios of the decaying q\* particle.
-
-| decay mode                | br. ratio [%] | decay mode                | br. ratio [%] |
-|---------------------------|---------------|---------------------------|---------------|
-| $U^* \rightarrow ug$      | 83.4          | $D^* \rightarrow dg$      | 83.4          |
-| $U^* \rightarrow dW$      | 10.9          | $D^* \rightarrow uW$      | 10.9          |
-| $U^* \rightarrow u\gamma$ | 2.2           | $D^* \rightarrow d\gamma$ | 0.5           |
-| $U^* \rightarrow uZ$      | 3.5           | $D^* \rightarrow dZ$      | 5.1           |
-
-The majority of excited quarks will decay to a quark and a gluon, but as this is virtually impossible to distinguish
-from QCD effects (for example from the qg $\rightarrow$ qg processes), this analysis will focus on the processes q\*
-$\rightarrow$ qW and q\* $\rightarrow$ qZ. In this case, due to jet substructure studies, it is possible to establish a
-discriminator between QCD background and jets originating in a W/Z decay. They still make up roughly 20 % of the signal
-events to study and therefore seem like a good choice.
+As described in [@sec:qs], the decay of the q\* particle to a quark and a vector boson, with the vector boson then
+decaying hadronically, will be investigated. This is the second most probable decay of the q\* particle and easier to
+analyse than the dominant decay to a quark and a gluon. Therefore it is a good choice for this research.
 
 The data studied was collected by the CMS experiment in the years 2016, 2017 and 2018. It is analysed with the Particle
 Flow algorithm to reconstruct jets and all the other particles forming during the collision. The jets are then clustered
-using the anti-$k_t$ algorithm with the distance parameter R being 0.8. Furthermore, the calorimeters of the CMS
-detector have to be calibrated. For that, jet energy corrections published by the CMS working group are applied to the
-data.
+using the anti-$k_t$ algorithm with the distance parameter R being 0.8.
 
-To find signal events in the data, this thesis looks at the dijet invariant mass distribution. The data is assumed to
-only consist of QCD background and signal events, other backgrounds are neglected. Cuts on several distributions are
-introduced to reduce the background and improve the sensitivity for the signal. If the q\* particle exists, the dijet
-invariant mass distribution should show a resonance at its invariant mass. This resonance will be looked for with
-statistical methods explained later on.
+To find the signal events described in [@sec:qs] in the data, this thesis looks at the dijet invariant mass
+distribution. The only background considered is the QCD background described in [@sec:qcdbg]. A selection using
+different kinematic variables as well as a tagger to identify jets from the decay of a vector boson is introduced to
+reduce the background and increase the sensitivity for the signal. After that, a peak in the dijet invariant mass
+distribution at the resonance mass of the q\* particle will be searched for.
 
 The analysis will be conducted with two different sets of data.
 First, only the data collected by CMS in 2016 will be
 used to compare the results to the previous analysis [@PREV_RESEARCH]. Then the combined data from 2016, 2017 and 2018
-will be used to improve the previously set limits for the mass of the q\* particle. Also, two different tagging
-mechanisms will be used. One based on the N-subjettiness variable used in the previous research, the other being a novel
-approach using a deep neural network.
+will be used to improve the previously set limits for the mass of the q\* particle. Also, two different V-tagging
+mechanisms will be used to compare their performance: one based on the N-subjettiness variable used in the previous
+research [@PREV_RESEARCH], the other being a novel approach using a deep neural network, which will be explained in the
+following.
 
 ## Signal and Background modelling
 
@@ -405,11 +434,14 @@ The signal is fitted using a double sided crystal ball function. It has six para
 
 - sigma: the functions width, in this case the resolution of the detector
 - n1, n2, alpha1, alpha2: parameters influencing the shape of the left and right tail
 
-A gaussian and a poisson have also been studied but found to not fit the signal sample very well as they aren't able to
-fit the tail on both sides of the peak.
+A gaussian and a poisson function have also been studied but found not to be able to reproduce the signal shape, as
+they couldn't model the tails on both sides of the peak.
 
 An example of a fit of these functions to a toy dataset with gaussian errors can be seen in [@fig:cb_fit]. In this
-figure, a binning of 200 GeV is used. For the actual analysis a 1 GeV binning will be used.
+figure, a binning of 200 GeV is used. For the actual analysis a 1 GeV binning will be used. It can be seen that the fit
+works very well and therefore confirms the functions chosen to model signal and background. This is supported by a
+$\chi^2 /$ ndof of 0.5 and a fitted mean for the signal of 2999 $\pm$ 23 $\si{\giga\eV}$, which is very close to the
+expected mean of 3000 GeV. These numbers show that the method in use is able to successfully describe the data.
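+
+For illustration, a sketch of the (unnormalised) double sided crystal ball shape with the six parameters listed above
+(Python; the actual fit is of course performed with dedicated fitting tools):
+
+```python
+import math
+
+def double_sided_crystal_ball(x, mean, sigma, alpha1, n1, alpha2, n2):
+    """Gaussian core with power-law tails below -alpha1 and above alpha2."""
+    t = (x - mean) / sigma
+    if -alpha1 <= t <= alpha2:                              # gaussian core
+        return math.exp(-0.5 * t * t)
+    if t < -alpha1:                                         # left power-law tail
+        a = (n1 / alpha1) ** n1 * math.exp(-0.5 * alpha1 ** 2)
+        return a * (n1 / alpha1 - alpha1 - t) ** -n1
+    a = (n2 / alpha2) ** n2 * math.exp(-0.5 * alpha2 ** 2)  # right power-law tail
+    return a * (n2 / alpha2 - alpha2 + t) ** -n2
+```
+
+Both tails match the gaussian core in value at the transition points, which is what lets the function describe the
+asymmetric resonance shape.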
 
 ![
 Combined fit of signal and background on a toy dataset with gaussian errors and a simulated resonance mass of 3 TeV.
@@ -419,24 +451,26 @@
 
 # Preselection and data quality
 
-To separate the background from the signal, cuts on several distributions have to be introduced. The selection of events
-is divided into
-two parts. The first one (the preselection) adds some general physics motivated cuts and is also used to make sure a
-good trigger efficiency is achieved. It is not expected to already provide a good separation of background and signal.
-In the second part, different taggers will be used as a discriminator between QCD background and signal events. After
-the preselection, it is made sure, that the simulated samples represent the real data well.
+To reduce the background and increase the signal sensitivity, a selection of events by different variables is
+introduced. It is divided into two stages. The first one (the preselection) applies some general physics motivated
+selection using kinematic variables and is also used to make sure a good trigger efficiency is achieved. In the second
+part, different taggers will be used as a discriminator between QCD background and signal events. After the
+preselection, it is made sure that the simulated samples represent the real data well by comparing the data with the
+simulation in the signal region as well as a sideband region, where no signal events are expected.
 
 ## Preselection
 
 First, all events are cleaned of jets with a $p_t < \SI{200}{\giga\eV}$ and a pseudorapidity $|\eta| > 2.4$. This is to
-discard soft background and to make sure the particles are in the barrel region of the detector for an optimal detector
-resolution. Furthermore, all events with one of the two highest $p_t$ jets having an angular separation smaller
+discard soft background and to make sure the particles are in the barrel region of the detector for an optimal track
+reconstruction. Furthermore, all events with one of the two highest $p_t$ jets having an angular separation smaller
 than 0.8 from any electron or muon are discarded to allow future use of the results in studies of the semi or
 all-leptonic decay channels.
 
-From a decaying q\* particle, we expect two jets in the endstate. Therefore a cut is added to have at least 2 jets.
-More jets are also possible, for example caused by gluon radiation of a quark causing another jet. The cut can be seen
-in [@fig:njets].
+From a decaying q\* particle, we expect two jets in the endstate. The dijet invariant mass of those two jets will be
+used to reconstruct the mass of the q\* particle. Therefore a cut requiring at least 2 jets is added.
+More jets are also possible, for example caused by gluon radiation of a quark causing another jet. If this is the case,
+the two jets with the highest $p_t$ are used for the reconstruction of the q\* mass.
+The distributions of the number of jets before and after the selection can be seen in [@fig:njets].
 
 \begin{figure}
 \begin{minipage}{0.5\textwidth}
@@ -457,11 +491,13 @@ are amplified by a factor of 10,000, to be visible.} \label{fig:njets}
 \end{figure}
 
-Another cut is on $\Delta\eta$. The q\* particle is expected to be very heavy in regards to the center of mass energy of
-the collision and will therefore be almost stationary. Its decay products should therefore be close to back to back,
-which means the $\Delta\eta$ distribution is expected to peak at 0. At the same time, particles originating from QCD
-effects are expected to have a higher $\Delta\eta$ as they mainly form from less heavy resonances. To maintain
-comparability, the same cut as in previous research of $\Delta\eta \le 1.3$ is used as can be seen in [@fig:deta].
+The next selection is done using $\Delta\eta = |\eta_1 - \eta_2|$, with $\eta_1$ and $\eta_2$ being the $\eta$ of the
+two leading jets in transverse momentum. The q\* particle is expected to be very heavy relative to the
+center of mass energy of the collision and will therefore be almost stationary. Its decay products should therefore be
+close to back to back, which means the $\Delta\eta$ distribution is expected to peak at 0. At the same time, particles
+originating from QCD effects are expected to have a higher $\Delta\eta$ as they mainly form from less heavy resonances.
+To maintain comparability, the same selection as in previous research of $\Delta\eta \le 1.3$ is used. A comparison of
+the $\Delta\eta$ distribution before and after the selection can be seen in [@fig:deta].
 
 \begin{figure}
 \begin{minipage}{0.5\textwidth}
@@ -482,11 +518,11 @@ are amplified by a factor of 10,000, to be visible.} \label{fig:deta}
 \end{figure}
 
-The last cut in the preselection is on the dijet invariant mass: $m_{jj} \ge \SI{1050}{\giga\eV}$. It is important for a
-high trigger efficiency and can be seen in [@fig:invmass]. Also, it has a huge impact on the background because it
+The last selection in the preselection is on the dijet invariant mass: $m_{jj} \ge \SI{1050}{\giga\eV}$. It is important
+for a high trigger efficiency and can be seen in [@fig:invmass]. Also, it has a huge impact on the background because it
 usually consists of way lighter particles. The q\* on the other hand is expected to have a very high invariant mass of
-more than 1 TeV. The distribution should be a smoothly falling function for the QCD background and peak at the simulated
-resonance mass for the signal events.
+more than 1 TeV. The $m_{jj}$ distribution should be a smoothly falling function for the QCD background and peak at the
+simulated resonance mass for the signal events.
 
 \begin{figure}
 \begin{minipage}{0.5\textwidth}
@@ -514,16 +550,18 @@ preselection is reduced to 5 % of the original events. For the combined data of
 similar. Decaying to qW signal efficiencies between 49 % (1.6 TeV) and 56 % (7 TeV) are reached, wheres the efficiencies
 when decaying to qZ are in the range of 46 % (1.6 TeV) to 50 % (7 TeV). Here, the background could be reduced to 8 % of
 the original events. So while keeping around 50 % of the signal, the background was already reduced to less than a
-tenth. Still, as can be seen in [@fig:njets] to [@fig:invmass], the amount of signal is very low and, without
-logarithmic scale, even has to be amplified to be visible.
+tenth. Still, as can be seen in [@fig:njets] to [@fig:invmass], the amount of signal is very low.
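+
+Taken together, the preselection amounts to a per-event decision along the following lines (an illustrative sketch
+with massless, pt-ordered jets; the field layout is hypothetical):
+
+```python
+import math
+
+def passes_preselection(jets, deta_max=1.3, mjj_min=1050.0):
+    """jets: pt-ordered (pt, eta, phi) tuples in GeV, already cleaned of
+    jets with pt < 200 GeV or |eta| > 2.4; jet masses are neglected here."""
+    if len(jets) < 2:                    # require at least two jets
+        return False
+    (pt1, eta1, phi1), (pt2, eta2, phi2) = jets[0], jets[1]
+    if abs(eta1 - eta2) > deta_max:      # back-to-back topology of the q* decay
+        return False
+    # massless dijet invariant mass: m^2 = 2 pt1 pt2 (cosh(deta) - cos(dphi))
+    mjj = math.sqrt(2.0 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))
+    return mjj >= mjj_min                # stay on the trigger efficiency plateau
+```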
Therefore events with jets with a softdropmass -higher than 105 GeV will not be used for this analysis which makes them a good sideband to use. - +was chosen. 105 GeV is well above the mass of 91 GeV of the Z boson, the heavier vector boson. Therefore it is very +unlikely that a jet with a softdropmass higher than 105 GeV originates from the decay of a vector boson, which makes +this region a good sideband.

In [@fig:sideband], the comparison of data with simulation in the sideband region can be seen for the softdropmass distribution as well as the dijet invariant mass distribution. As in [@fig:data-mc], the histograms are rescaled so that the dijet invariant mass distributions of data and simulation have the same integral.

@@ -589,14 +625,14 @@ combined data from 2016, 2017 and 2018.}

# Jet substructure selection

-So far it was made sure, that the actual data and the simulation match well after the preselection and no unwanted side -effects are introduced in the data by the used cuts. Now another selection has to be introduced, to further reduce the -background to be able to extract the hypothetical signal events from the actual data. +So far it was made sure that the actual data and the simulation are in good agreement after the preselection and no +unwanted side effects are introduced in the data by the used cuts. Now another selection has to be introduced to +further reduce the background to be able to extract the hypothetical signal events from the actual data.

This is done by distinguishing between QCD and signal events using a tagger to identify jets coming from a vector boson. Two different taggers will be used to later compare the results. The decay analysed includes either -a W or Z boson, which are, compared to the particles in QCD effects, very heavy. This can be used by adding a cut on the -softdropmass of a jet. The softdropmass of at least one of the two leading jets is expected to be within +from a vector boson. Two different taggers will be used to later compare their performance. The decay analysed includes +either a W or Z boson, which are, compared to the particles in QCD effects, very heavy. This can be used by adding a cut +on the softdropmass of a jet. The softdropmass of at least one of the two leading jets is expected to be between $\SI{35}{\giga\eV}$ and $\SI{105}{\giga\eV}$. This cut already provides a good separation of QCD and signal events, on which the two taggers presented next can build.

@@ -605,7 +641,7 @@ QCD effects. This value will be optimized afterwards to make sure the maximum ef

## N-Subjettiness

-The N-subjettiness $\tau_n$ is a jet shape parameter designed to identify boosted hadronically-decaying objects. When a +The N-subjettiness $\tau_N$ is a jet shape parameter designed to identify boosted hadronically-decaying objects. When a vector boson decays hadronically, it produces two quarks, each causing a jet. But because the vector boson is highly boosted, the two quarks are very close together and appear, after applying a clustering algorithm, as just one jet. The N-subjettiness now tries to figure out whether one jet might consist of two subjets by using the kinematics and positions of the
@@ -651,7 +687,7 @@ vector boson. Therefore, using the same way to choose a candidate jet as for the
applied so that this candidate jet has a WvsQCD/ZvsQCD value greater than some value determined by the optimization presented next.

-## Optimization +## Optimization {#sec:opt}

To figure out the best value to cut on the discriminators introduced by the two taggers, a value to quantify how good a cut is has to be introduced.
For that, the significance calculated by $\frac{S}{\sqrt{B}}$ will be used. S stands for
@@ -660,7 +696,9 @@ error on the background so it will be calculated for the 2 TeV masspoint where e
this assumption. This follows from the central limit theorem, which states that for identically distributed random variables, their sum converges to a gaussian distribution. The significance therefore represents how well the signal can be distinguished from the background in units of the standard deviation of the background. As interval, a 10 % margin -around the masspoint is chosen. +around the nominal resonance mass is chosen. The significance is then calculated for different selections on the +discriminants of the two taggers and plotted as a function of the minimum resp. maximum allowed value of the +discriminant to pass the selection for the deep boosted resp. the N-subjettiness tagger.

\begin{figure}
\begin{minipage}{0.5\textwidth}
@@ -678,13 +716,14 @@ boosted cut is placed at $\ge 0.95$. For the deep boosted tagger, 0.97 would giv
it is very close to the edge where the significance drops very low and the higher the cut the less background will be left to calculate the cross section limits, especially at higher resonance masses, the slightly less strict cut is chosen.

-The significance for the $\tau_{21}$ cut is 14.08, and for the deep boosted tagger 25.61. +The significance for the $\tau_{21}$ cut is 14, and for the deep boosted tagger 26. +

For both taggers, a low purity category is also introduced for high TeV regions. Using the cuts optimized for 2 TeV, there are very few background events left for higher resonance masses, but to reliably calculate cross section limits, those are needed. As low purity category for the N-subjettiness tagger, a cut at $0.35 < \tau_{21} < 0.75$ is used. For the deep boosted tagger the opposite cut from the high purity category is used: $VvsQCD < 0.95$.

-# Signal extraction +# Signal extraction {#sec:extr}

After the optimization, the optimal selection for the N-subjettiness as well as the deep boosted tagger is found and applied to the simulated samples as well as the data collected by the CMS. The fit described in [@sec:moa] is performed
@@ -702,99 +741,45 @@ uncertainty with the observed limit is also calculated.

## Uncertainties

-The following uncertainties are considered: +For calculating the cross section of the signal, four sources of uncertainty are considered.

-- *Luminosity*: the integrated luminosity of the LHC has an uncertainty of 2.5 %.
-- *Jet Energy Corrections*: for the Jet Energy Corrections, an uncertainty of 2 % is assumed.
-- *Tagger Efficiency(?)*: 6 % (TODO!)
-- *Parameter Uncertainty of the fit*: The CombinedLimit program used for determining the cross section varies the
- parameters used for the fit and therefore includes their uncertainties to calculate the final result.

+First, the uncertainty of the Jet Energy Corrections. When measuring a particle's energy with the ECAL or HCAL part of +the CMS, the electronic signals sent by the photodetectors in the calorimeters have to be converted to actual energy +values. An error in this calibration therefore shifts the measured energy to higher or lower values, which also causes +the position of the signal peak in the $m_{jj}$ distribution to vary. The uncertainty is approximated to be +2 %. + +Second, the tagger is not perfect: some events that do not originate from a V boson are wrongly selected, while on the +other hand some events that do originate from one are rejected.
This influences the events chosen for the analysis and +is therefore also considered as an uncertainty, which is approximated to be 6 %. + +Third, the uncertainty of the parameters of the background fit is also considered, as it might slightly change the background +shape and therefore influence how many signal and background events are reconstructed from the data. + +Fourth, the 2.5 % uncertainty on the luminosity of the LHC is also taken into account for the final results.

# Results

-In this chapter the results and a comparison to previous research will be shown as well as a comparison between the two -different taggers used. +This chapter will start by presenting the results for the data of year 2016 using both taggers and comparing them to the +previous research [@PREV_RESEARCH]. It will then go on to show the results for the combined dataset, again using both +taggers and comparing their performance.

## 2016

-Using the data collected by the CMS experiment on 2016, the cross section limits seen in [@fig:res2016] were obtained. -The extracted cross section limits are: +Using the data collected by the CMS experiment in 2016, the cross section limits seen in [@fig:res2016] were obtained. + +As described in [@sec:extr], the calculated cross section limits are then used to calculate a mass limit, meaning the +mass up to which the q\* particle can be excluded, by finding the crossing of the theory line with the observed cross section +limit. In [@fig:res2016] it can be seen that, for the deep boosted tagger, the observed limit in the region where theory and observed limit cross is +very high compared to when using the N-subjettiness tagger. Therefore the two lines cross earlier, which results in +lower exclusion limits on the mass of the q\* particle, causing the deep boosted tagger to perform worse than the +N-subjettiness tagger with regard to those limits, as can be seen in [@tbl:res2016]. The table also shows the +upper and lower limits on the mass found by calculating the crossing of the theory plus resp. minus its uncertainty. Because +the theory and observed limit lines are very flat in the high TeV region, even a small uncertainty of the +theory can cause a large difference in the mass limit.

-: Cross Section limits using 2016 data and the N-subjettiness tagger for the decay to qW
-
-| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
-|------------|-----------------|------------------|------------------|-----------------|
-| 1.6 | 0.10406 | 0.14720 | 0.07371 | 0.08165 |
-| 1.8 | 0.07656 | 0.10800 | 0.05441 | 0.04114 |
-| 2.0 | 0.05422 | 0.07605 | 0.03879 | 0.04043 |
-| 2.5 | 0.02430 | 0.03408 | 0.01747 | 0.04052 |
-| 3.0 | 0.01262 | 0.01775 | 0.00904 | 0.02109 |
-| 3.5 | 0.00703 | 0.00992 | 0.00502 | 0.00399 |
-| 4.0 | 0.00424 | 0.00603 | 0.00300 | 0.00172 |
-| 4.5 | 0.00355 | 0.00478 | 0.00273 | 0.00249 |
-| 5.0 | 0.00269 | 0.00357 | 0.00211 | 0.00240 |
-| 6.0 | 0.00103 | 0.00160 | 0.00068 | 0.00062 |
-| 7.0 | 0.00063 | 0.00105 | 0.00039 | 0.00086 |
-
-
-: Cross Section limits using 2016 data and the deep boosted tagger for the decay to qW
-
-| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs.
limit [pb] | -|------------|-----------------|------------------|------------------|-----------------| -| 1.6 | 0.17750 | 0.25179 | 0.12572 | 0.38242 | -| 1.8 | 0.11125 | 0.15870 | 0.07826 | 0.11692 | -| 2.0 | 0.08188 | 0.11549 | 0.05799 | 0.09528 | -| 2.5 | 0.03328 | 0.04668 | 0.02373 | 0.03653 | -| 3.0 | 0.01648 | 0.02338 | 0.01181 | 0.01108 | -| 3.5 | 0.00840 | 0.01195 | 0.00593 | 0.00683 | -| 4.0 | 0.00459 | 0.00666 | 0.00322 | 0.00342 | -| 4.5 | 0.00276 | 0.00412 | 0.00190 | 0.00366 | -| 5.0 | 0.00177 | 0.00271 | 0.00118 | 0.00401 | -| 6.0 | 0.00110 | 0.00175 | 0.00071 | 0.00155 | -| 7.0 | 0.00065 | 0.00108 | 0.00041 | 0.00108 | - - -: Cross Section limits using 2016 data and the N-subjettiness tagger for the decay to qZ - -| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] | -|------------|-----------------|------------------|------------------|-----------------| -| 1.6 | 0.08687 | 0.12254 | 0.06174 | 0.06987 | -| 1.8 | 0.06719 | 0.09477 | 0.04832 | 0.03424 | -| 2.0 | 0.04734 | 0.06640 | 0.03405 | 0.03310 | -| 2.5 | 0.01867 | 0.02619 | 0.01343 | 0.03214 | -| 3.0 | 0.01043 | 0.01463 | 0.00744 | 0.01773 | -| 3.5 | 0.00596 | 0.00840 | 0.00426 | 0.00347 | -| 4.0 | 0.00353 | 0.00500 | 0.00250 | 0.00140 | -| 4.5 | 0.00233 | 0.00335 | 0.00164 | 0.00181 | -| 5.0 | 0.00157 | 0.00231 | 0.00110 | 0.00188 | -| 6.0 | 0.00082 | 0.00126 | 0.00054 | 0.00049 | -| 7.0 | 0.00050 | 0.00083 | 0.00031 | 0.00066 | - - -: Cross Section limits using 2016 data and deep boosted tagger for the decay to qZ - -| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] | -|------------|-----------------|------------------|------------------|-----------------| -| 1.6 | 0.16687 | 0.23805 | 0.11699 | 0.35999 | -| 1.8 | 0.12750 | 0.17934 | 0.09138 | 0.12891 | -| 2.0 | 0.09062 | 0.12783 | 0.06474 | 0.09977 | -| 2.5 | 0.03391 | 0.04783 | 0.02422 | 0.03754 | -| 3.0 | 0.01781 | 0.02513 | 0.01277 | 0.01159 | -| 3.5 | 0.00949 | 0.01346 | 0.00678 | 0.00741 | -| 4.0 | 0.00494 | 0.00711 | 0.00349 | 0.00362 | -| 4.5 | 0.00293 | 0.00429 | 0.00203 | 0.00368 | -| 5.0 | 0.00188 | 0.00284 | 0.00127 | 0.00426 | -| 6.0 | 0.00102 | 0.00161 | 0.00066 | 0.00155 | -| 7.0 | 0.00053 | 0.00085 | 0.00034 | 0.00085 | - - -As can be seen in [@fig:res2016], the observed limit in the region where theory and observed limit cross is very high -compared to when using the N-subjettiness tagger. Therefore the two lines cross earlier, which results in lower -exclusion limits on the mass of the q\* particle. - - -: Mass limits found using the data collected in 2016 +: Mass limits found using the data collected in 2016 {#tbl:res2016} | Decay | Tagger | Limit [TeV] | Upper Limit [TeV] | Lower Limit [TeV] | |-------|--------------|-------------|-------------------|-------------------| @@ -825,12 +810,15 @@ exclusion limits on the mass of the q\* particle. ### Previous research -The limit is already slightly higher than the one from previous research, which was found to be 5 TeV for the decay to -qW and 4.7 TeV for the decay to qZ. This is mainly due to the fact, that in our data, the observed limit at the -intersection point happens to be in the lower region of the expected limit interval and therefore causing a very late -crossing with the theory line when using the N-subjettiness tagger (as can be seen in [@fig:res2016]. This could be -caused by small differences of the setup used or slightly differently processed data. 
In general, the results appear to -be very similar to the previous research, seen in [@fig:prev]. +The limit established by using the N-subjettiness tagger on the 2016 data is already slightly higher than the one from +previous research, which was found to be 5 TeV for the decay to qW and 4.7 TeV for the decay to qZ. This is mainly due +to the fact that, in our data, the observed limit at the intersection point happens to be in the lower region of the +expected limit interval, which causes a very late crossing with the theory line when using the N-subjettiness +tagger (as can be seen in [@fig:res2016]). This could be caused by small differences of the setup used or slightly +differently processed data. Comparing the expected limits, the values calculated in this thesis differ from those of the +previous research by between 3 % and 30 %. Neither of the two results is, however, consistently lower or higher; the +differences fluctuate. Therefore it can be said that the results are in good agreement. The +cross section limits of the previous research can be seen in [@fig:prev].

\begin{figure}
\begin{minipage}{0.5\textwidth}
@@ -844,81 +832,11 @@ Taken from \cite{PREV_RESEARCH}.} \label{fig:prev}
\end{figure}

-## 2016 + 2017 + 2018
-
-Using the combined data, the cross section limits seen in [@fig:resCombined] were obtained. It is quite obvious, that
-the limits are already significantly lower than when only using the data of 2016. The extracted cross section limits are
-the following:
-
-
-: Cross Section limits using the combined data and the N-subjettiness tagger for the decay to qW
-
-| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
-|------------|-----------------|------------------|------------------|-----------------|
-| 1.6 | 0.05703 | 0.07999 | 0.04088 | 0.03366 |
-| 1.8 | 0.03953 | 0.05576 | 0.02833 | 0.04319 |
-| 2.0 | 0.02844 | 0.03989 | 0.02045 | 0.04755 |
-| 2.5 | 0.01270 | 0.01781 | 0.00913 | 0.01519 |
-| 3.0 | 0.00658 | 0.00923 | 0.00473 | 0.01218 |
-| 3.5 | 0.00376 | 0.00529 | 0.00269 | 0.00474 |
-| 4.0 | 0.00218 | 0.00309 | 0.00156 | 0.00114 |
-| 4.5 | 0.00132 | 0.00188 | 0.00094 | 0.00068 |
-| 5.0 | 0.00084 | 0.00122 | 0.00060 | 0.00059 |
-| 6.0 | 0.00044 | 0.00066 | 0.00030 | 0.00041 |
-| 7.0 | 0.00022 | 0.00036 | 0.00014 | 0.00043 |
-
-
-: Cross Section limits using the combined data and the deep boosted tagger for the decay to qW
-
-| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
-|------------|-----------------|------------------|------------------|-----------------|
-| 1.6 | 0.06656 | 0.09495 | 0.04698 | 0.12374 |
-| 1.8 | 0.04281 | 0.06141 | 0.03001 | 0.05422 |
-| 2.0 | 0.03297 | 0.04650 | 0.02363 | 0.04658 |
-| 2.5 | 0.01328 | 0.01868 | 0.00950 | 0.01109 |
-| 3.0 | 0.00650 | 0.00917 | 0.00464 | 0.00502 |
-| 3.5 | 0.00338 | 0.00479 | 0.00241 | 0.00408 |
-| 4.0 | 0.00182 | 0.00261 | 0.00129 | 0.00127 |
-| 4.5 | 0.00107 | 0.00156 | 0.00074 | 0.00123 |
-| 5.0 | 0.00068 | 0.00102 | 0.00046 | 0.00149 |
-| 6.0 | 0.00038 | 0.00060 | 0.00024 | 0.00034 |
-| 7.0 | 0.00021 | 0.00035 | 0.00013 | 0.00046 |
-
-
-
-: Cross Section limits using the combined data and the N-subjettiness tagger for the decay to qZ
-
-| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs.
limit [pb] |
-|------------|-----------------|------------------|------------------|-----------------|
-| 1.6 | 0.05125 | 0.07188 | 0.03667 | 0.02993 |
-| 1.8 | 0.03547 | 0.04989 | 0.02551 | 0.03614 |
-| 2.0 | 0.02523 | 0.03539 | 0.01815 | 0.04177 |
-| 2.5 | 0.01059 | 0.01485 | 0.00761 | 0.01230 |
-| 3.0 | 0.00576 | 0.00808 | 0.00412 | 0.01087 |
-| 3.5 | 0.00327 | 0.00460 | 0.00234 | 0.00425 |
-| 4.0 | 0.00190 | 0.00269 | 0.00136 | 0.00097 |
-| 4.5 | 0.00119 | 0.00168 | 0.00084 | 0.00059 |
-| 5.0 | 0.00077 | 0.00110 | 0.00054 | 0.00051 |
-| 6.0 | 0.00039 | 0.00057 | 0.00026 | 0.00036 |
-| 7.0 | 0.00019 | 0.00031 | 0.00013 | 0.00036 |
-
-
-: Cross Section limits using the combined data and deep boosted tagger for the decay to qZ
-
-| Mass [TeV] | Exp. limit [pb] | Upper limit [pb] | Lower limit [pb] | Obs. limit [pb] |
-|------------|-----------------|------------------|------------------|-----------------|
-| 1.6 | 0.07719 | 0.10949 | 0.05467 | 0.14090 |
-| 1.8 | 0.05297 | 0.07493 | 0.03752 | 0.06690 |
-| 2.0 | 0.03875 | 0.05466 | 0.02768 | 0.05855 |
-| 2.5 | 0.01512 | 0.02126 | 0.01080 | 0.01160 |
-| 3.0 | 0.00773 | 0.01088 | 0.00554 | 0.00548 |
-| 3.5 | 0.00400 | 0.00565 | 0.00285 | 0.00465 |
-| 4.0 | 0.00211 | 0.00301 | 0.00149 | 0.00152 |
-| 4.5 | 0.00118 | 0.00172 | 0.00082 | 0.00128 |
-| 5.0 | 0.00073 | 0.00108 | 0.00050 | 0.00161 |
-| 6.0 | 0.00039 | 0.00060 | 0.00025 | 0.00036 |
-| 7.0 | 0.00021 | 0.00034 | 0.00013 | 0.00045 |

+## Combined dataset

+Using the combined data, the cross section limits seen in [@fig:resCombined] were obtained. Compared to only using the +2016 dataset, the cross section limits are almost cut in half. This shows the big improvement achieved by using more +than three times the amount of data.

The results for the mass limits of the combined years are as follows:

@@ -933,6 +851,9 @@ The results for the mass limits of the combined years are as follows:

| qZ | deep boosted | 4.92 | 5.02 | 4.80 |

+The combination of the three years not only improved the cross section limits, but also the limit for the mass of the +q\* particle. The final result is 1 TeV higher for the decay to qW and almost 0.8 TeV higher for the decay to qZ than +what was concluded by the previous research [@PREV_RESEARCH].

\begin{figure}
\begin{minipage}{0.5\textwidth}
@@ -952,8 +873,6 @@ deep boosted tagger (right).} \label{fig:resCombined}
\end{figure}

-The combination of the three years has a big impact on the result. The final limit is 1 TeV higher than what could -previously be concluded.

## Comparison of taggers

@@ -961,9 +880,19 @@ The previously shown results already show, that the deep boosted tagger was not
results compared to the N-subjettiness tagger.

For further comparison, in [@fig:limit_comp] the expected limits of the different taggers for the q\* $\rightarrow$ qW and the q\* $\rightarrow$ qZ decay are shown. It can be seen that the deep boosted tagger is at best as good as the -N-subjettiness tagger. This was not the expected result, as the deep neural network was supposed to provide better -separation between signal and background events than the older N-subjettiness tagger. Recently, some issues with the -training of the deep boosted tagger used in this analysis were found, so those might explain the bad performance. +N-subjettiness tagger. This was not the expected result, as the deep neural network was already found to provide a +higher significance in the optimisation done in [@sec:opt]. The higher significance should also result in lower cross +section limits.
Apparently, doing the optimization only on data of the year 2018 was not the best choice. To make sure +there is no mistake in the setup, the expected cross section limits using only the high purity category of the two +taggers with 2018 data are also compared in [@fig:comp_2018]. There, the cross section limits calculated using the deep +boosted tagger are a bit lower than with the N-subjettiness tagger, showing that the method used for optimisation was +working but should have been applied to the combined dataset. + +Recently, some issues with the training of the deep boosted tagger used in this analysis were also found, which might +explain why it did not perform better in general. +

+![Comparison of the deep boosted and N-subjettiness taggers in the high purity category using the data from year 2018.
+](./figures/limit_comp_2018.pdf){#fig:comp_2018}

\begin{figure}
\begin{minipage}{0.5\textwidth}
@@ -977,25 +906,47 @@ decay to qZ} \label{fig:limit_comp}
\end{figure}

+\clearpage
\newpage

# Summary

In this thesis, a limit on the mass of the q\* particle has been successfully established. By combining the data from the years 2016, 2017 and 2018, collected by the CMS experiment, the previously set limit could be significantly -improved. For that, a combined fit to the QCD background and signal had to be performed and the cross section limits -extracted. Also, the new deep boosted tagger, using a deep neural network, was compared to the older N-subjettiness -tagger and found to not significantly change the result, neither to the better nor to the worse. Due to some training -issues identified lately, there is still a good chance, that, with that issue fixed, it will be able to further improve -the results. -Also previously research of the 2016 data was repeated and the results compared. The previous research arrived at a -exclusion limit up to 5 TeV resp. 4.7 TeV for the decay to qW resp. qZ, this thesis at 5.4 TeV resp. 4.9 TeV. The -difference can be explained by small differences in the data used and the setup itself. After that, using the combined -data, the limit could be significantly improved to exclude the q\* particle up to a mass of 6.2 TeV resp. 5.5 TeV. -With the research presented in this thesis, it would also be possible to test other theories of the q\* particle that -predict its existence at lower masses, than the one used, by overlaying the different theory curves in the plots shown -in [@fig:res2016] and [@fig:resCombined]. +improved.
+
+For the data analysis, the following selection was applied:
+
+- #jets >= 2
+- $\Delta\eta \le 1.3$
+- $m_{jj} \ge \SI{1050}{\giga\eV}$
+- $\SI{35}{\giga\eV} < m_{SDM} < \SI{105}{\giga\eV}$
+
+For the deep boosted tagger, a high purity category of $VvsQCD > 0.95$ and a low purity category of $VvsQCD \le 0.95$ was +used. For the N-subjettiness tagger the high purity category was $\tau_{21} < 0.35$ and the low purity category $0.35 < +\tau_{21} < 0.75$. These values were found by optimizing for the highest possible significance of the signal. + +After the selection, the cross section limits were extracted from the data and new exclusion limits for the mass of the +q\* particle were set. These are 6.1 TeV for the decay to qW and 5.5 TeV for the decay to qZ. Those +limits are about 1 TeV higher than the ones found in previous research, which were 5 TeV resp. 4.7 TeV. + +Two different taggers were used and their results compared.
The newer deep boosted tagger was found to not improve the result +over the older N-subjettiness tagger. This was rather unexpected but might be caused by some training issues that were +identified lately. + +This research can also be used to test other theories of the q\* particle that predict its existence at lower masses +than the one used, by overlaying the different theory curves in the plots shown in [@fig:res2016] and +[@fig:resCombined]. + +The optimization process used to find the optimal values for the discriminants provided by the taggers was found to be +suboptimal. It was only done using 2018 data, with which the deep boosted tagger showed a higher significance than the +N-subjettiness tagger. Apparently, the assumption that the same optimization would apply to the data of the other years +as well did not hold. Using the combined dataset, the deep boosted tagger showed no better cross section limits than +the N-subjettiness tagger, which are directly related to the significance used for the optimization. Therefore, with a +better optimization and the fixed training issues of the deep boosted tagger, it is very likely that the result +presented could be further improved.

\newpage

\nocite{*}
+
diff --git a/thesis.pdf b/thesis.pdf
index 60bc519..4521e50 100644
Binary files a/thesis.pdf and b/thesis.pdf differ
diff --git a/thesis.tex b/thesis.tex
index 9c592f8..655ae68 100644
--- a/thesis.tex
+++ b/thesis.tex
@@ -78,7 +78,6 @@
 \usepackage{tikz-feynman}
 \usepackage{csquotes}
 \pagenumbering{gobble}
-\setlength{\parindent}{1.0em}
 \setlength{\parskip}{0.5em}
 \bibliographystyle{lucas_unsrt}
 \makeatletter
@@ -138,39 +137,48 @@ show that it isn't yet a full \enquote{theory of everything}.

To solve these shortcomings, lots of theories beyond the standard model exist that try to explain some of them.

-One category of such theories is based on a composite quark model. They -predict that quarks consist of particles unknown to us so far or can -bind to other particles using unknown forces. This could explain some -symmetries between particles and reduce the number of constants needed -to explain the properties of the known particles. One common prediction -of those theories are excited quark states. Those are quark states of -higher energy that can decay to an unexcited quark under the emission of -a boson. These decays are the topic of this thesis. +One category of such theories is based on a composite quark model. +Quarks are currently considered elementary particles by the Standard +Model. The composite quark models on the other hand predict that quarks +consist of particles unknown to us so far or can bind to other particles +using unknown forces. This could explain the symmetries between +particles and reduce the number of constants needed to explain the +properties of the known particles. One common prediction of those +theories is excited quark states. Those are quark states of higher +energy that can decay to an unexcited quark under the emission of a +boson. This thesis will search for their decay to a quark and a W/Z +boson. The W/Z boson then decays in the hadronic channel to two more +quarks. The endstate of this decay contains only quarks, making Quantum +Chromodynamics effects the main background.
-In previous research, a lower limit for the mass of an excited quark has -already been set using data from the 2016 run of the Large Hadron -Collider with an integrated luminosity of +In a previous research \autocite{PREV_RESEARCH}, a lower limit for the +mass of an excited quark has already been set using data from the 2016 +run of the Large Hadron Collider with an integrated luminosity of \(\SI{35.92}{\per\femto\barn}\). Since then, a lot more data has been -collected, totalling to \(\SI{137.19}{\per\femto\barn}\). This thesis -uses this new data as well as a new technique to identify decays of -highly boosted particles based on a deep neural network to further -improve this limit and therefore exclude the excited quark particle to -even higher masses. It will also compare this new tagging technique to -an older tagger based on jet substructure studies used in the previous +collected, totalling to \(\SI{137.19}{\per\femto\barn}\) of data usable +for research. This thesis uses this new data as well as a new technique +to identify decays of highly boosted particles based on a deep neural +network. By using more data and new tagging techniques, it aims to +either confirm the existence of the q* particle or improve the +previously set lower limit of 5 TeV respectively 4.7 TeV for the decay +to qW respectively qZ on its mass to even higher values. It will also +directly compare the performance of this new tagging technique to an +older tagger based on jet substructure studies used in the previous research. -First, a theoretical background will be presented explaining in short -the Standard Model, its shortcomings and the theory of excited quarks. -Then the Large Hadron Collider and the Compact Muon Solenoid, the -detector that collected the data for this analysis, will be described. -After that, the main analysis part follows, describing how the data was -used to extract limits on the mass of the excited quark particle. At the -very end, the results are presented and compared to previous research. +In chapter 2, a theoretical background will be presented explaining in +short the Standard Model, its shortcomings and the theory of excited +quarks. Then, in chapter 3, the Large Hadron Collider and the Compact +Muon Solenoid, the detector that collected the data for this analysis, +will be described. After that, in chapters 4-7, the main analysis part +follows, describing how the data was used to extract limits on the mass +of the excited quark particle. At the very end, in chapter 8, the +results are presented and compared to previous research. \newpage -\hypertarget{theoretical-background}{% -\section{Theoretical background}\label{theoretical-background}} +\hypertarget{theoretical-motivation}{% +\section{Theoretical motivation}\label{theoretical-motivation}} This chapter presents a short summary of the theoretical background relevant to this thesis. It first gives an introduction to the standard @@ -178,10 +186,10 @@ model itself and some of the issues it raises. It then goes on to explain the background processes of quantum chromodynamics and the theory of q*, which will be the main topic of this thesis. 
-\hypertarget{standard-model}{%
-\subsection{Standard model}\label{standard-model}}
+\hypertarget{sec:sm}{%
+\subsection{Standard model}\label{sec:sm}}

-The Standard Model of physics proofed very successful in describing +The Standard Model of physics proved to be very successful in describing three of the four fundamental interactions currently known: the electromagnetic, weak and strong interaction. The fourth, gravity, could not yet be successfully included in this theory.

@@ -189,16 +197,22 @@ not yet be successfully included in this theory.

The Standard Model divides all particles into spin-\(\frac{n}{2}\) fermions and spin-n bosons, where n could be any integer but so far is only known to be one for fermions and either one (gauge bosons) or zero -(scalar bosons) for bosons. The fermions are further divided into quarks -and leptons. Each of those exists in six so called flavours. -Furthermore, quarks and leptons can also be divided into three -generations, each of which contains two particles. In the lepton -category, each generation has one charged lepton and one neutrino, that -has no charge. Also, the mass of the neutrinos is not yet known, only an -upper bound has been established. A full list of particles known to the -standard model can be found in fig.~\ref{fig:sm}. Furthermore, all -fermions have an associated anti particle with reversed charge. Multiple -quarks can form bound states called hadrons (e.g.~proton and neutron). +(scalar bosons) for bosons. Fermions are further classified into quarks +and leptons. Quarks and leptons can also be categorized into three +generations, each of which contains two particles; the particle types +are called flavours. For leptons, the three generations each consist of +a lepton and its corresponding neutrino, namely first the electron, +second the muon and third the tau. The three quark generations consist +of first the up and down, second the charm and strange, and third the +top and bottom quarks. So overall, there exist a total of six quark and +six lepton flavours. A full list of particles known to the standard +model can be found in fig.~\ref{fig:sm}. Furthermore, all fermions have +an associated anti particle with reversed charge. + +The matter around us is built from so-called hadrons, which are bound +states of quarks, for example protons and neutrons. Long-lived hadrons +consist of up and down quarks, as the heavier ones decay to those over +time.

\begin{figure}
\hypertarget{fig:sm}{%
\centering
@@ -243,59 +257,40 @@ Cabibbo-Kobayashi-Maskawa matrix:

matrix element \(V_{ij}\). It is easy to see that the change of flavour in the same generation is way more likely than any other flavour change.

-The quantum chromodynamics (QCD) describe the strong interaction of +Due to their high masses of 80.39 GeV resp. 91.19 GeV, the \(W^\pm\) and +\(Z^0\) bosons themselves decay very quickly, either in the leptonic or +the hadronic decay channel. In the leptonic channel, the \(W^\pm\) decays to +a lepton and the corresponding anti-lepton neutrino, in the hadronic +channel it decays to a quark and an anti-quark of a different flavour. +Due to the \(Z^0\) boson having no charge, it always decays to a fermion +and its anti-particle, in the leptonic channel this might be for example +an electron - positron pair, in the hadronic channel an up and anti-up +quark pair. This thesis examines the hadronic decay channel, where both +vector bosons essentially decay to two quarks. + +The quantum chromodynamics (QCD) describes the strong interaction of particles.
It applies to all particles carrying colour (e.g.~quarks). -The force is mediated by the gluon. This boson carries colour as well, -although it doesn't carry only one colour but rather a combination of a -colour and an anticolour, and can therefore interact with itself and -exists in eight different variant. As a result of this, processes, where -a gluon decays into two gluons are possible. Furthermore the strong -force, binding to colour carrying particles, increases with their -distance r making it at a certain point more energetically efficient to -form a new quark - antiquark pair than separating the two particles even -further. This effect is known as colour confinement. Due to this effect, -colour carrying particles can't be observed directly, but rather form so -called jets that cause hadronic showers in the detector. An effect -called Hadronisation. - -\hypertarget{sec:qcdbg}{% -\subsubsection{Quantum Chromodynamic background}\label{sec:qcdbg}} - -In this thesis, a decay with two jets in the endstate will be analysed. -Therefore it will be hard to distinguish the signal processes from QCD -effects. Those can also produce two jets in the endstate, as can be seen -in fig.~\ref{fig:qcdfeynman}. They are also happening very often in a -proton proton collision, as it is happening in the Large Hadron -Collider. This is caused by the structure of the proton. It not only -consists of three quarks, called valence quarks, but also of a lot of -quark-antiquark pairs connected by gluons, called the sea quarks, that -exist due to the self interaction of the gluons binding the three -valence quarks. Therefore in a proton - proton collision, interactions -of gluons and quarks are the main processes causing a very strong QCD -background. - -\begin{figure} -\centering -\feynmandiagram [horizontal=v1 to v2] { - q1 [particle=\(q\)] -- [fermion] v1 -- [gluon] g1 [particle=\(g\)], - v1 -- [gluon] v2, - q2 [particle=\(q\)] -- [fermion] v2 -- [gluon] g2 [particle=\(g\)], -}; -\feynmandiagram [horizontal=v1 to v2] { - g1 [particle=\(g\)] -- [gluon] v1 -- [gluon] g2 [particle=\(g\)], - v1 -- [gluon] v2, - g3 [particle=\(g\)] -- [gluon] v2 -- [gluon] g4 [particle=\(g\)], -}; -\caption{Two examples of QCD processes resulting in two jets.} \label{fig:qcdfeynman} -\end{figure} +The force is mediated by gluons. These bosons carry colour as well, +although they don't carry only one colour but rather a combination of a +colour and an anticolour, and can therefore interact with themselves and +exist in eight different variants. As a result of this, processes, where +a gluon decays into two gluons are possible. Furthermore the strength of +the strong force, binding to colour carrying particles, increases with +their distance making it at a certain point more energetically efficient +to form a new quark - antiquark pair than separating the two particles +even further. This effect is known as colour confinement. Due to this +effect, colour carrying particles can't be observed directly, but rather +form so called jets that cause hadronic showers in the detector. Those +jets are cone like structures made of hadrons and other particles. The +effect is called Hadronisation. \hypertarget{shortcomings-of-the-standard-model}{% \subsubsection{Shortcomings of the Standard Model}\label{shortcomings-of-the-standard-model}} -While being very successful in describing mostly all of the effects we -can observe in particle colliders so far, the Standard Model still has -several shortcomings. 
+While being very successful in describing the effects observed in +particle colliders or the particles reaching Earth from cosmological +sources, the Standard Model still has several shortcomings.

\begin{itemize}
\tightlist
\item
@@ -307,7 +302,7 @@ several shortcomings.
galaxies can't be explained by the known matter. Dark matter currently is our best theory to explain those.
\item
- \textbf{Matter-antimatter assymetry}: The amount of matter vastly
+ \textbf{Matter-antimatter asymmetry}: The amount of matter vastly
outweighs the amount of antimatter in the observable universe. This can't be explained by the standard model, which predicts a similar amount of matter and antimatter.
@@ -326,17 +321,18 @@ several shortcomings.

\hypertarget{sec:qs}{%
\subsection{Excited quark states}\label{sec:qs}}

-One category of theories that try to solve some of the shortcomings of -the standard model are the composite quark models. Those state, that -quarks consist of some particles unknown to us so far. This could -explain the symmetries between the different fermions. A common +One category of theories that try to explain the symmetries between +particles of the standard model are the composite quark models. Those +state that quarks consist of some particles unknown to us so far. This +could explain the symmetries between the different fermions. A common prediction of those models are excited quark states (q*, q**, q***\ldots). Similar to atoms, which can be excited by the absorption of a photon and can then decay again under emission of a photon with an energy corresponding to the excited state, those excited quark states -could decay under the emission of some boson. Quarks are smaller than -\(10^{-18}\) m, due to that, excited states have to be of very high -energy. That will cause the emitted boson to be highly boosted. +could decay under the emission of any boson. Quarks are smaller than +\(10^{-18}\) m. This corresponds to an energy scale of approximately 1 +TeV. Therefore the excited quark states are expected to be in that +region. That will cause the emitted boson to be highly boosted.

\begin{figure}
\centering
@@ -353,20 +349,92 @@ decaying to two quarks.} \label{fig:qsfeynman}

This thesis will search data collected by the CMS in the years 2016, 2017 and 2018 for the single excited quark state q* which can decay to a quark and any boson. An example of a q* decaying to a quark and a W -boson can be seen in fig.~\ref{fig:qsfeynman}. The boson quickly further -decays into for example two quarks. Because the boson is highly boosted, -those will be very close together and therefore appear to the detector -as only one jet. This means that the decay of a q* particle will have -two jets in the endstate (assuming the W/Z boson decays to two quarks) -and will therefore be hard to distinguish from the QCD background -described in sec.~\ref{sec:qcdbg}. +boson can be seen in fig.~\ref{fig:qsfeynman}. As explained in +sec.~\ref{sec:sm}, the vector boson can then decay either in the +hadronic or leptonic decay channel. This research investigates only the +hadronic channel with two quarks in the endstate. Because the boson is +highly boosted, those will be very close together and therefore appear +to the detector as only one jet. This means that the decay of a q* +particle will have two jets in the endstate (assuming the W/Z boson +decays to two quarks) and will therefore be hard to distinguish from the +QCD background described in sec.~\ref{sec:qcdbg}.
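The merging of the two decay quarks into one jet can be made plausible with a rough back-of-the-envelope estimate (an illustration added here, not part of the analysis itself): for a boosted two-body decay, the angular separation of the decay products scales approximately as \(\Delta R \approx 2m/p_t\). The short Python sketch below, using hypothetical q* masses, shows that for a boson emitted in a multi-TeV q* decay this separation is far below a typical jet radius of 0.8:

\begin{verbatim}
# Rule-of-thumb opening angle dR ~ 2*m/pt of the two quarks from a
# boosted two-body decay. Masses and momenta in GeV; the q* masses
# below are hypothetical examples.
M_W, M_Z = 80.4, 91.2

def opening_angle(m_boson, pt_boson):
    return 2.0 * m_boson / pt_boson

for m_qstar in (2000.0, 5000.0):
    pt = m_qstar / 2.0  # each decay product carries about m(q*)/2
    print(f"m(q*) = {m_qstar / 1e3:.0f} TeV: "
          f"dR(W) ~ {opening_angle(M_W, pt):.2f}, "
          f"dR(Z) ~ {opening_angle(M_Z, pt):.2f}")
\end{verbatim}

For a 2 TeV q*, this gives \(\Delta R \approx 0.16\) for a W boson, so the two quarks indeed end up inside a single jet.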
+
+The choice of only examining the decay of the q* particle to the vector
+bosons is motivated by the branching ratios calculated for the decay
+\autocite{QSTAR_THEORY}:
+
+\begin{longtable}[]{@{}llll@{}}
+\caption{Branching ratios of the decaying q* particle.}\tabularnewline
+\toprule
+decay mode & br. ratio {[}\%{]} & decay mode & br. ratio
+{[}\%{]}\tabularnewline
+\midrule
+\endfirsthead
+\toprule
+decay mode & br. ratio {[}\%{]} & decay mode & br. ratio
+{[}\%{]}\tabularnewline
+\midrule
+\endhead
+\(U^* \rightarrow ug\) & 83.4 & \(D^* \rightarrow dg\) &
+83.4\tabularnewline
+\(U^* \rightarrow dW\) & 10.9 & \(D^* \rightarrow uW\) &
+10.9\tabularnewline
+\(U^* \rightarrow u\gamma\) & 2.2 & \(D^* \rightarrow d\gamma\) &
+0.5\tabularnewline
+\(U^* \rightarrow uZ\) & 3.5 & \(D^* \rightarrow dZ\) &
+5.1\tabularnewline
+\bottomrule
+\end{longtable}
+
+The decays to the vector bosons have the second highest branching +ratios. The decay to a gluon and a quark is the dominant decay, but +virtually impossible to distinguish from the QCD background described +in the next section. This makes the decay to the vector bosons the +obvious choice.

To reconstruct the mass of the q* particle from an event successfully -recognized to be the decay of such a particle, the dijet invariant mass, -the mass of the two jets in the final state, can be calculated by adding -their four momenta, vectors consisting of the energy and momentum of a -particle, together. From the four momentum it's easy to derive the mass -by solving \(E=\sqrt{p^2 + m^2}\) for m. +recognized to be the decay of such a particle, the dijet invariant mass +has to be calculated. This can be achieved by adding the four momenta of +the two jets, vectors consisting of the energy and momentum of a +particle, together. From the four momentum it's easy to derive the mass +by solving \(E=\sqrt{p^2 + m^2}\) for m.
+
+This theory has already been investigated in \autocite{PREV_RESEARCH} +analysing data recorded by CMS in 2016, excluding the q* particle up to +a mass of 5 TeV resp. 4.7 TeV for the decay to qW resp. qZ in the +hadronic decay of the vector boson. This thesis aims to either exclude +the particle to higher masses or find a resonance showing its existence +using the much larger dataset that is available now.
+
+\hypertarget{sec:qcdbg}{%
+\subsubsection{Quantum Chromodynamic background}\label{sec:qcdbg}}
+
+In this thesis, a decay with two jets in the endstate will be analysed. +Therefore it will be hard to distinguish the signal processes from QCD +effects. Those can also produce two jets in the endstate, as can be seen +in fig.~\ref{fig:qcdfeynman}. They also occur very often in +proton-proton collisions such as those in the Large Hadron Collider. +This is caused by the structure of the proton. It not only consists of +three quarks, called valence quarks, but also of a lot of +quark-antiquark pairs connected by gluons, called the sea quarks, which +exist due to the self-interaction of the gluons binding the three +valence quarks. Therefore the QCD multijet background is the dominant +background of the signal described in sec.~\ref{sec:qs}.
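The dijet invariant mass computation described above is simple enough to sketch directly. The following minimal Python example (an illustration with hand-picked four-momenta, not the actual analysis code of this thesis) adds the four momenta of the two leading jets and solves \(E=\sqrt{p^2 + m^2}\) for m:

\begin{verbatim}
import math

def dijet_invariant_mass(jet1, jet2):
    # Each jet is a four momentum (E, px, py, pz) in GeV.
    E  = jet1[0] + jet2[0]
    px = jet1[1] + jet2[1]
    py = jet1[2] + jet2[2]
    pz = jet1[3] + jet2[3]
    # m = sqrt(E^2 - |p|^2), i.e. E = sqrt(p^2 + m^2) solved for m.
    return math.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# Two back-to-back 1 TeV jets (massless approximation) give m_jj = 2 TeV:
print(dijet_invariant_mass((1000.0, 1000.0, 0.0, 0.0),
                           (1000.0, -1000.0, 0.0, 0.0)))  # -> 2000.0
\end{verbatim}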
+ +\begin{figure} +\centering +\feynmandiagram [horizontal=v1 to v2] { + q1 [particle=\(q\)] -- [fermion] v1 -- [gluon] g1 [particle=\(g\)], + v1 -- [gluon] v2, + q2 [particle=\(q\)] -- [fermion] v2 -- [gluon] g2 [particle=\(g\)], +}; +\feynmandiagram [horizontal=v1 to v2] { + g1 [particle=\(g\)] -- [gluon] v1 -- [gluon] g2 [particle=\(g\)], + v1 -- [gluon] v2, + g3 [particle=\(g\)] -- [gluon] v2 -- [gluon] g4 [particle=\(g\)], +}; +\caption{Two examples of QCD processes resulting in two jets.} \label{fig:qcdfeynman} +\end{figure} \newpage @@ -381,7 +449,8 @@ this thesis will be described. The Large Hadron Collider is the world's largest and most powerful particle accelerator \autocite{website}. It has a perimeter of 27 km and -can collide protons at a centre of mass energy of 13 TeV. It is home to +can accelerate two beams of protons to an energy of 6.5 TeV resulting in +a collision with a centre of mass energy of 13 TeV. It is home to several experiments, the biggest of those are ATLAS and the Compact Muon Solenoid (CMS). Both are general-purpose detectors to investigate the particles that form during particle collisions. @@ -413,8 +482,8 @@ LHC, the integrated luminosity is introduced as \(L_{int} = \int L dt\). \hypertarget{compact-muon-solenoid}{% \subsection{Compact Muon Solenoid}\label{compact-muon-solenoid}} -The data used in this thesis was captured by the Compact Muon Solenoid -(CMS). It is one of the biggest experiments at the Large Hadron +The data used in this thesis was recorded by the Compact Muon Solenoid +(CMS). It is one of the four main experiments at the Large Hadron Collider. It can detect all elementary particles of the standard model except neutrinos. For that, it has an onion like setup. The particles produced in a collision first go through a tracking system. They then @@ -431,17 +500,20 @@ three years has a total integrated luminosity of \hypertarget{coordinate-conventions}{% \subsubsection{Coordinate conventions}\label{coordinate-conventions}} -Per convention, the z axis points along the beam axis, the y axis -upwards and the x axis horizontal towards the LHC centre. Furthermore, -the azimuthal angle \(\phi\), which describes the angle in the x - y -plane, the polar angle \(\theta\), which describes the angle in the y - -z plane and the pseudorapidity \(\eta\), which is defined as -\(\eta = -ln\left(tan\frac{\theta}{2}\right)\) are introduced. The +Per convention, the z axis points along the beam axis in the direction +of the magnetic fields of the solenoid, the y axis upwards and the x +axis horizontal towards the LHC centre. The azimuthal angle \(\phi\), +which describes the angle in the x - y plane, the polar angle +\(\theta\), which describes the angle in the y - z plane and the +pseudorapidity \(\eta\), which is defined as +\(\eta = -ln\left(tan\frac{\theta}{2}\right)\) are also introduced. The coordinates are visualised in fig.~\ref{fig:cmscoords}. Furthermore, to -describe a particles momentum, often the transverse momentum, \(p_t\) is -used. It is the component of the momentum transversal to the beam axis. -It is a useful quantity, because the sum of all transverse momenta has -to be zero. Missing transverse momentum implies particles that weren't +describe a particle's momentum, often the transverse momentum, \(p_t\) +is used. It is the component of the momentum transversal to the beam +axis. 
Before the collision, the transverse momentum obviously has to be +zero, therefore, due to conservation of momentum, the sum of all +transverse momenta after the collision has to be zero, too. If this is +not the case for the detected events, it implies particles that weren't detected, such as neutrinos.

\begin{figure}
@@ -457,24 +529,25 @@ https://inspirehep.net/record/1236817/plots}\label{fig:cmscoords}

\hypertarget{the-tracking-system}{%
\subsubsection{The tracking system}\label{the-tracking-system}}

-The tracking system is built of two parts, first a pixel detector and -then silicon strip sensors. It is used to reconstruct the tracks of -charged particles, measuring their charge sign, direction and momentum. -It is as close to the collision as possible to be able to identify -secondary vertices. +The tracking system is built of two parts: closest to the collision is a +pixel detector and around that silicon strip sensors. They are used to +reconstruct the tracks of charged particles, measuring their charge +sign, direction and momentum. They are as close to the collision as +possible to be able to identify secondary vertices.

\hypertarget{the-electromagnetic-calorimeter}{%
\subsubsection{The electromagnetic
calorimeter}\label{the-electromagnetic-calorimeter}}

The electromagnetic calorimeter measures the energy of photons and -electrons. It is made of tungstate crystal. When passed by particles, it -produces light in proportion to the particle's energy. This light is -measured by photodetectors that convert this scintillation light to an -electrical signal. To measure a particles energy, it has to leave its -whole energy in the ECAL, which is true for photons and electrons, but -not for other particles such as hadrons and muons. They too leave some -energy in the ECAL. +electrons. It is made of tungstate crystal and photodetectors. When +passed by particles, the crystal produces light in proportion to the +particle's energy. This light is measured by the photodetectors that +convert this scintillation light to an electrical signal. To measure a +particle's energy, it has to leave its whole energy in the ECAL, which is +true for photons and electrons, but not for other particles such as +hadrons and muons. Those only deposit a part of their energy in the ECAL +and are not stopped by it.

\hypertarget{the-hadronic-calorimeter}{%
\subsubsection{The hadronic
@@ -549,12 +622,13 @@ several other clustering algorithms, namely the \(k_t\),

Cambridge/Aachen and SISCone clustering algorithms.

The anti-\(k_t\) clustering algorithm associates hard particles with -their soft particles surrounding them within a radius R in the \(\eta\) -- \(\phi\) plane forming cone like jets. If two jets overlap, the jets -shape is changed according to its hardness. A softer particles jet will -change its shape more than a harder particles. A visual comparison of -four different clustering algorithms can be seen in -fig.~\ref{fig:antiktcomparision}. For this analysis, a radius of 0.8 is +the soft particles surrounding them within a radius R in the \(\eta\) - +\(\phi\) plane, where distances are measured as +\(\Delta R = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}\), forming cone-like +jets. If two jets overlap, the jets' shapes are changed according to +their hardness in terms of transverse momentum. A softer particle's jet +will change its shape more than a harder particle's. A visual comparison +of four different clustering algorithms can be seen in +fig.~\ref{fig:antiktcomparison}. For this analysis, a radius of 0.8 is used.
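The distance measures that drive the anti-\(k_t\) clustering are compact enough to sketch. The Python fragment below is a simplified illustration of the distances defined in \autocite{ANTIKT}, not the implementation used for the actual analysis; at every clustering step, the smallest of all pairwise distances \(d_{ij}\) and particle-beam distances \(d_{iB}\) decides whether two objects are merged or an object is promoted to a final jet:

\begin{verbatim}
import math

R = 0.8  # distance parameter used in this analysis

def delta_r(eta1, phi1, eta2, phi2):
    # Distance in the eta-phi plane, with the phi difference
    # wrapped into [-pi, pi].
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def d_ij(pt1, eta1, phi1, pt2, eta2, phi2):
    # Anti-kt pairwise distance: min(1/pt_i^2, 1/pt_j^2) * dR^2 / R^2.
    return min(pt1**-2, pt2**-2) * \
        delta_r(eta1, phi1, eta2, phi2)**2 / R**2

def d_iB(pt):
    # Anti-kt particle-beam distance: 1/pt^2.
    return pt**-2
\end{verbatim}

Because of the inverse powers of \(p_t\), hard particles always have the smallest distances, so soft particles are attached to the nearest hard particle first; this is what produces the regular, cone-like jets seen for the anti-\(k_t\) algorithm.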
Furthermore, to approximate the mass of a heavy particle that caused a
@@ -566,85 +640,60 @@ the mass of a particle causing a jet than taking the mass of all
constituent particles of the jet combined.

\begin{figure}
-\hypertarget{fig:antiktcomparision}{%
+\hypertarget{fig:antiktcomparison}{%
\centering
\includegraphics{./figures/antikt-comparision.png}
-\caption{Comparision of the \(k_t\), Cambridge/Aachen, SISCone and
+\caption{Comparison of the \(k_t\), Cambridge/Aachen, SISCone and
anti-\(k_t\) algorithms clustering a sample parton-level event with many
-random soft \enquote{ghosts}. Taken from}\label{fig:antiktcomparision}
+random soft \enquote{ghosts}. Taken from
+\autocite{ANTIKT}}\label{fig:antiktcomparison}
}
\end{figure}

+Fig.~\ref{fig:antiktcomparison} clearly shows that the jets +reconstructed using the anti-\(k_t\) algorithm are closest to having a +cone-like shape. +

\newpage

-\hypertarget{method-of-analysis}{%
-\section{Method of analysis}\label{method-of-analysis}}
+\hypertarget{sec:moa}{%
+\section{Method of analysis}\label{sec:moa}}

This section gives an overview of how the data gathered by the LHC and CMS is going to be analysed to be able to either exclude the q* particle to even higher masses than already done or maybe confirm its existence.

-As described in sec.~\ref{sec:qs}, an excited quark q* can decay to a
-quark and any boson. The branching ratios are calculated to be as
-follows \autocite{QSTAR_THEORY}:
-
-\begin{longtable}[]{@{}llll@{}}
-\caption{Branching ratios of the decaying q* particle.}\tabularnewline
-\toprule
-decay mode & br. ratio {[}\%{]} & decay mode & br. ratio
-{[}\%{]}\tabularnewline
-\midrule
-\endfirsthead
-\toprule
-decay mode & br. ratio {[}\%{]} & decay mode & br. ratio
-{[}\%{]}\tabularnewline
-\midrule
-\endhead
-\(U^* \rightarrow ug\) & 83.4 & \(D^* \rightarrow dg\) &
-83.4\tabularnewline
-\(U^* \rightarrow dW\) & 10.9 & \(D^* \rightarrow uW\) &
-10.9\tabularnewline
-\(U^* \rightarrow u\gamma\) & 2.2 & \(D^* \rightarrow d\gamma\) &
-0.5\tabularnewline
-\(U^* \rightarrow uZ\) & 3.5 & \(D^* \rightarrow dZ\) &
-5.1\tabularnewline
-\bottomrule
-\end{longtable}
-
-The majority of excited quarks will decay to a quark and a gluon, but as
-this is virtually impossible to distinguish from QCD effects (for
-example from the qg \(\rightarrow\) qg processes), this analysis will
-focus on the processes q* \(\rightarrow\) qW and q* \(\rightarrow\) qZ.
-In this case, due to jet substructure studies, it is possible to
-establish a discriminator between QCD background and jets originating in
-a W/Z decay. They still make up roughly 20 \% of the signal events to
-study and therefore seem like a good choice.
+As described in sec.~\ref{sec:qs}, the decay of the q* particle to a +quark and a vector boson, with the vector boson then decaying +hadronically, will be investigated. This is the second most probable +decay of the q* particle and easier to analyse than the dominant decay +to a quark and a gluon. Therefore it is a good choice for this research.

The data studied was collected by the CMS experiment in the years 2016, 2017 and 2018. It is analysed with the Particle Flow algorithm to reconstruct jets and all the other particles forming during the collision. The jets are then clustered using the anti-\(k_t\) algorithm -with the distance parameter R being 0.8. Furthermore, the calorimeters -of the CMS detector have to be calibrated. For that, jet energy -corrections published by the CMS working group are applied to the data.
+with the distance parameter R being 0.8. -To find signal events in the data, this thesis looks at the dijet -invariant mass distribution. The data is assumed to only consist of QCD -background and signal events, other backgrounds are neglected. Cuts on -several distributions are introduced to reduce the background and -improve the sensitivity for the signal. If the q* particle exists, the -dijet invariant mass distribution should show a resonance at its -invariant mass. This resonance will be looked for with statistical -methods explained later on. +To find the signal events, described in sec.~\ref{sec:qs}, in the data, +this thesis looks at the dijet invariant mass distribution. The only +background considered is the QCD background described in +sec.~\ref{sec:qcdbg}. A selection using different kinematic variables as +well as a tagger to identify jets from the decay of a vector boson is +introduced to reduce the background and increase the sensitivity for the +signal. After that, it will be looked for a peak in the dijet invariant +mass distribution at the resonance mass of the q* particle. The analysis will be conducted with two different sets of data. First, only the data collected by CMS in 2016 will be used to compare the results to the previous analysis \autocite{PREV_RESEARCH}. Then the combined data from 2016, 2017 and 2018 will be used to improve the previously set limits for the mass of the q* particle. Also, two -different tagging mechanisms will be used. One based on the -N-subjettiness variable used in the previous research, the other being a -novel approach using a deep neural network. +different V-tagging mechanisms will be used to compare their +performance. One based on the N-subjettiness variable used in the +previous research \autocite{PREV_RESEARCH}, the other being a novel +approach using a deep neural network, that will be explained in the +following. \hypertarget{signal-and-background-modelling}{% \subsection{Signal and Background @@ -688,14 +737,19 @@ six parameters: and right tail \end{itemize} -A gaussian and a poisson have also been studied but found to not fit the -signal sample very well as they aren't able to fit the tail on both -sides of the peak. +A gaussian and a poisson function have also been studied but found to be +not able to reproduce the signal shape as they couldn't model the tails +on both sides of the peak. An example of a fit of these functions to a toy dataset with gaussian errors can be seen in fig.~\ref{fig:cb_fit}. In this figure, a binning of 200 GeV is used. For the actual analysis a 1 GeV binning will be -used. +used. It can be seen that the fit works very well and therefore confirms +the functions chosen to model signal and background. This is supported +by a \(\chi^2 /\) ndof of 0.5 and a found mean for the signal at 2999 +\(\pm\) 23 \(\si{\giga\eV}\) which is extremely close to the expected +3000 GeV mean. Those numbers clearly show that the method in use is able +to successfully describe the data. \begin{figure} \hypertarget{fig:cb_fit}{% @@ -713,15 +767,16 @@ TeV.}\label{fig:cb_fit} \section{Preselection and data quality}\label{preselection-and-data-quality}} -To separate the background from the signal, cuts on several -distributions have to be introduced. The selection of events is divided -into two parts. The first one (the preselection) adds some general -physics motivated cuts and is also used to make sure a good trigger -efficiency is achieved. It is not expected to already provide a good -separation of background and signal. 
\newpage \hypertarget{preselection-and-data-quality}{% \section{Preselection and data quality}\label{preselection-and-data-quality}} -To separate the background from the signal, cuts on several -distributions have to be introduced. The selection of events is divided -into two parts. The first one (the preselection) adds some general -physics motivated cuts and is also used to make sure a good trigger -efficiency is achieved. It is not expected to already provide a good -separation of background and signal. In the second part, different -taggers will be used as a discriminator between QCD background and -signal events. After the preselection, it is made sure, that the -simulated samples represent the real data well. +To reduce the background and increase the signal sensitivity, a +selection of events using different variables is introduced. It is +divided into two stages. The first one (the preselection) applies some +general physics-motivated cuts on kinematic variables and is also used +to make sure a good trigger efficiency is achieved. In the second stage, +different taggers will be used as a discriminator between QCD background +and signal events. After the preselection, it is made sure that the +simulated samples represent the real data well by comparing the data +with the simulation in the signal region as well as in a sideband +region, where no signal events are expected. \hypertarget{preselection}{% \subsection{Preselection}\label{preselection}} @@ -729,16 +784,20 @@ simulated samples represent the real data well. First, all events are cleaned of jets with a \(p_t < \SI{200}{\giga\eV}\) and a pseudorapidity \(|\eta| > 2.4\). This is to discard soft background and to make sure the particles are in the -barrel region of the detector for an optimal detector resolution. +barrel region of the detector for an optimal track reconstruction. Furthermore, all events with one of the two highest \(p_t\) jets having an angular separation smaller than 0.8 from any electron or muon are discarded to allow future use of the results in studies of the semi- or all-leptonic decay channels. -From a decaying q* particle, we expect two jets in the endstate. -Therefore a cut is added to have at least 2 jets. More jets are also -possible, for example caused by gluon radiation of a quark causing -another jet. The cut can be seen in fig.~\ref{fig:njets}. +From a decaying q* particle, we expect two jets in the final state. The +dijet invariant mass of those two jets will be used to reconstruct the +mass of the q* particle. Therefore a cut is added requiring at least 2 +jets. More jets are also possible, for example caused by gluon radiation +off a quark, producing another jet. If this is the case, the two jets +with the highest \(p_t\) are used for the reconstruction of the q* mass. +The distributions of the number of jets before and after the selection +can be seen in fig.~\ref{fig:njets}. \begin{figure} \begin{minipage}{0.5\textwidth} @@ -759,14 +818,18 @@ are amplified by a factor of 10,000, to be visible.} \label{fig:njets} \end{figure}
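Taken together with the \(\Delta\eta\) and \(m_{jj}\) requirements introduced in the following paragraphs, the preselection can be summarised in a short sketch. The tuple layout and function names below are illustrative assumptions, not those of the actual analysis framework.

\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation in the eta-phi plane."""
    dphi = abs(phi1 - phi2)
    dphi = min(dphi, 2.0 * math.pi - dphi)
    return math.hypot(eta1 - eta2, dphi)

def passes_preselection(jets, leptons, m_jj):
    """jets: list of (pt, eta, phi) sorted by decreasing pt, in GeV;
    leptons: list of (eta, phi) of reconstructed electrons and muons;
    m_jj: dijet invariant mass of the two leading jets in GeV."""
    # discard soft jets and jets outside the barrel acceptance
    jets = [j for j in jets if j[0] >= 200.0 and abs(j[1]) <= 2.4]
    if len(jets) < 2:
        return False                   # need at least the two leading jets
    (pt1, eta1, phi1), (pt2, eta2, phi2) = jets[0], jets[1]
    # veto events where a leading jet overlaps with a lepton (dR < 0.8)
    for leta, lphi in leptons:
        if (delta_r(eta1, phi1, leta, lphi) < 0.8 or
                delta_r(eta2, phi2, leta, lphi) < 0.8):
            return False
    if abs(eta1 - eta2) > 1.3:         # back-to-back q* decay topology
        return False
    return m_jj >= 1050.0              # driven by the trigger efficiency
\end{verbatim}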
-Another cut is on \(\Delta\eta\). The q* particle is expected to be very -heavy in regards to the center of mass energy of the collision and will -therefore be almost stationary. Its decay products should therefore be -close to back to back, which means the \(\Delta\eta\) distribution is -expected to peak at 0. At the same time, particles originating from QCD -effects are expected to have a higher \(\Delta\eta\) as they mainly form -from less heavy resonances. To maintain comparability, the same cut as -in previous research of \(\Delta\eta \le 1.3\) is used as can be seen in +The next selection is done using \(\Delta\eta = |\eta_1 - \eta_2|\), +with \(\eta_1\) and \(\eta_2\) being the \(\eta\) of the two jets +leading in transverse momentum. The q* particle is expected to be very +heavy with respect to the centre of mass energy of the collision and +will therefore be almost stationary. Its decay products should therefore +be close to back-to-back, which means the \(\Delta\eta\) distribution is +expected to peak at 0. At the same time, particles originating from QCD +effects are expected to have a higher \(\Delta\eta\) as they mainly form +from less heavy resonances. To maintain comparability, the same +selection as in previous research of \(\Delta\eta \le 1.3\) is used. A +comparison of the \(\Delta\eta\) distribution before and after the +selection can be seen in fig.~\ref{fig:deta}. \begin{figure} @@ -788,14 +851,14 @@ are amplified by a factor of 10,000, to be visible.} \label{fig:deta} \end{figure} -The last cut in the preselection is on the dijet invariant mass: +The last selection in the preselection is on the dijet invariant mass: \(m_{jj} \ge \SI{1050}{\giga\eV}\). It is important for a high trigger efficiency and can be seen in fig.~\ref{fig:invmass}. Also, it has a huge impact on the background, because the background usually consists of much lighter particles. The q* on the other hand is expected to have a very high -invariant mass of more than 1 TeV. The distribution should be a smoothly -falling function for the QCD background and peak at the simulated -resonance mass for the signal events. +invariant mass of more than 1 TeV. The \(m_{jj}\) distribution should be +a smoothly falling function for the QCD background and peak at the +simulated resonance mass for the signal events. \begin{figure} \begin{minipage}{0.5\textwidth} @@ -828,8 +891,7 @@ qZ are in the range of 46 \% (1.6 TeV) to 50 \% (7 TeV). Here, the background could be reduced to 8 \% of the original events. So while keeping around 50 \% of the signal, the background was already reduced to less than a tenth. Still, as can be seen in fig.~\ref{fig:njets} to -fig.~\ref{fig:invmass}, the amount of signal is very low and, without -logarithmic scale, even has to be amplified to be visible. +fig.~\ref{fig:invmass}, the amount of signal is very low. \hypertarget{data---monte-carlo-comparison}{% \subsection{Data - Monte Carlo @@ -840,9 +902,14 @@ being compared to the actual data of the corresponding year collected by the CMS detector. This is done for the year 2016 and for the combined data of years 2016, 2017 and 2018. The distributions are rescaled so the integrals over the invariant mass distributions of data and simulation are -the same. In fig.~\ref{fig:data-mc}, the three distributions that cuts -were applied on can be seen for year 2016 and the combined data of years -2016 to 2018. +the same. In fig.~\ref{fig:data-mc}, the distributions of the three +variables that were used for the preselection can be seen for the year +2016 and the combined data of the years 2016 to 2018. For analysing the +real data from the CMS, jet energy corrections have to be applied. Those +calibrate the ECAL and HCAL parts of the CMS, so that the energy of the +detected particles can be measured correctly. The corrections used were +published by the CMS group. {[}citation needed{]}
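The rescaling amounts to one common normalisation factor applied to all simulated histograms; a toy sketch (plain numpy arrays standing in for the histograms, names illustrative) could look as follows.

\begin{verbatim}
import numpy as np

def rescale_mc_to_data(data_mjj, mc_mjj, mc_hists):
    """Scale all simulated histograms by one common factor so that the
    integral of the simulated dijet-mass histogram matches the data.
    All histograms are numpy arrays of bin contents."""
    scale = data_mjj.sum() / mc_mjj.sum()
    return [h * scale for h in mc_hists]
\end{verbatim}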
\begin{figure} \begin{minipage}{0.33\textwidth} @@ -876,23 +943,20 @@ simulation. \hypertarget{sideband}{% \subsubsection{Sideband}\label{sideband}} -The sideband is introduced to make sure there are no unwanted side -effects of the used cuts. It is a region in which no data is used for -the actual analysis. Again, data and the Monte Carlo simulation are -compared. For this analysis, the region where the softdropmass of both -of the two jets with the highest transverse momentum (\(p_t\)) is more -than 105 GeV was chosen. Because the decay of a q* to a vector boson is -being investigated, later on, a selection is applied that one of those -particles has to have a mass between 105 GeV and 35 GeV. Therefore -events with jets with a softdropmass higher than 105 GeV will not be -used for this analysis which makes them a good sideband to use. - -In fig.~\ref{fig:sideband}, the comparison of data with simulation in -the sideband region can be seen for the softdropmass distribution as -well as the dijet invariant mass distribution. As in {[}fig:data-mc{]}, -the histograms are rescaled, so that the dijet invariant mass -distributions of data and simulation have the same integral. It can be -seen, that in the sideband region data and simulation match very well. +The sideband is introduced to make sure that no bias between the data +and the Monte Carlo simulation is introduced. It is a region in which no +signal event is expected. Again, data and the Monte Carlo simulation are +compared. For this analysis, the region where the softdropmass of both +of the two jets with the highest transverse momentum (\(p_t\)) is more +than 105 GeV was chosen. 105 GeV is well above the mass of 91 GeV of the +Z boson, the heavier of the two vector bosons. Therefore it is very +unlikely that jets with a softdropmass above 105 GeV originate from the +decay of a vector boson, which makes this region a good sideband. + +In fig.~\ref{fig:sideband}, the comparison of data with simulation in +the sideband region can be seen for the softdropmass distribution as +well as the dijet invariant mass distribution. As in +fig.~\ref{fig:data-mc}, the histograms are rescaled so that the dijet +invariant mass distributions of data and simulation have the same +integral. It can be seen that in the sideband region data and simulation +match very well. \begin{figure} \begin{minipage}{0.5\textwidth} @@ -917,27 +981,32 @@ combined data from 2016, 2017 and 2018.} \hypertarget{jet-substructure-selection}{% \section{Jet substructure selection}\label{jet-substructure-selection}} -So far it was made sure, that the actual data and the simulation match -well after the preselection and no unwanted side effects are introduced -in the data by the used cuts. Now another selection has to be +So far it was made sure that the actual data and the simulation are in +good agreement after the preselection and no unwanted side effects are +introduced in the data by the used cuts. Now another selection has to be introduced to further reduce the background, in order to extract the hypothetical signal events from the actual data. This is done by distinguishing between QCD and signal events using a tagger to identify jets coming from a vector boson. Two different -taggers will be used to later compare the results. The decay analysed -includes either a W or Z boson, which are, compared to the particles in -QCD effects, very heavy. This can be used by adding a cut on the -softdropmass of a jet. The softdropmass of at least one of the two -leading jets is expected to be within \(\SI{35}{\giga\eV}\) and +taggers will be used to later compare their performance. The decay +analysed includes either a W or Z boson, which are, compared to the +particles in QCD effects, very heavy. This can be used by adding a cut +on the softdropmass of a jet. The softdropmass of at least one of the +two leading jets is expected to be within \(\SI{35}{\giga\eV}\) and \(\SI{105}{\giga\eV}\). This cut already provides a good separation of QCD and signal events, on which the two taggers presented next can build.
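Anticipating the candidate-jet choice described for both taggers below (the tagger is evaluated on the one of the two leading jets inside the softdropmass window; if both pass, the one with the higher \(p_t\)), a minimal sketch with illustrative names:

\begin{verbatim}
def vboson_candidate(jets):
    """Pick the jet the tagger is evaluated on. jets: list of
    (pt, softdrop_mass) for the two leading-pt jets of the event."""
    in_window = [j for j in jets[:2] if 35.0 < j[1] < 105.0]
    if not in_window:
        return None              # event fails the softdropmass requirement
    return max(in_window, key=lambda j: j[0])  # higher-pt jet if both pass
\end{verbatim}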
+Both taggers provide a discriminator value to decide whether a jet +originates in the decay of a vector boson or from QCD effects. The cut +on this value will be optimized afterwards to achieve the highest +possible significance. + \hypertarget{n-subjettiness}{% \subsection{N-Subjettiness}\label{n-subjettiness}} -The N-subjettiness \(\tau_n\) is a jet shape parameter designed to +The N-subjettiness \(\tau_N\) is a jet shape parameter designed to identify boosted hadronically-decaying objects. When a vector boson decays hadronically, it produces two quarks, each causing a jet. But because of the high mass of the vector bosons, the particles are highly @@ -959,9 +1028,13 @@ rather than using \(\tau_N\) directly, the ratio \(\tau_{21} = \tau_2/\tau_1\) is a better discriminator between QCD events and events originating from the decay of a boosted vector boson. -The \(\tau_{21}\) cut is applied to the one of the two highest \(p_t\) -jets passing the softdropmass window. If both of them pass, it is -applied to the one with higher \(p_t\). +The lower the \(\tau_{21}\) is, the more likely a jet is caused by the +decay of a vector boson. Therefore a selection is introduced requiring +\(\tau_{21}\) of one candidate jet to be smaller than some value that +will be determined by the optimization process described in +sec.~\ref{sec:opt}. As candidate jet, the one of the two highest \(p_t\) +jets passing the softdropmass window is used. If both of them pass, the +one with the higher \(p_t\) is chosen. \hypertarget{deepak8}{% \subsection{DeepAK8}\label{deepak8}} @@ -994,12 +1067,15 @@ tagger of heavy resonances. As the mass variable is already in use for the softdropmass selection, this version of the tagger is to be preferred. -Just like the \(\tau_{21}\) cut, the cut on the discriminator introduced -by the DeepAK8 tagger is applied on the one of the two highest \(p_t\) -jets passing the softdropmass window. +The higher the discriminator value of the deep boosted tagger, the more +likely the jet is to be caused by the decay of a vector boson. +Therefore, using the same candidate-jet choice as for the N-subjettiness +tagger, a selection is applied requiring this candidate jet to have a +WvsQCD/ZvsQCD value greater than some value determined by the +optimization presented next. -\hypertarget{optimization}{% -\subsection{Optimization}\label{optimization}} +\hypertarget{sec:opt}{% +\subsection{Optimization}\label{sec:opt}} To figure out the best value to cut on the discriminators introduced by the two taggers, a value to quantify how good a cut is has to be @@ -1008,12 +1084,16 @@ introduced. For that, the significance calculated as \(S/\sqrt{B}\) is used, with S being the amount of signal events and B the amount of background events in a given interval. This value assumes a Gaussian error on the background, so it will be calculated for the 2 TeV masspoint where enough background events exist -to justify this assumption. This follows from the central limit theorem +to justify this assumption. This follows from the central limit theorem, which states that for identically distributed random variables, their sum -converges to a gaussian distribution. The value therefore represents how -good the signal can be distinguished from the background in units of the -standard deviation of the background. As interval, a 10 \% margin around -the masspoint is chosen. +converges to a Gaussian distribution. The significance therefore +represents how well the signal can be distinguished from the background +in units of the standard deviation of the background. As the interval, a +10 \% window around the nominal resonance mass is chosen. The +significance is then calculated for different selections on the +discriminant of the two taggers and plotted as a function of the cut +value: the minimum allowed discriminant for the deep boosted tagger and +the maximum allowed \(\tau_{21}\) for the N-subjettiness tagger.
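A toy version of this scan, maximising \(S/\sqrt{B}\) over a list of possible cut values, could look as follows (illustrative names, unweighted events):

\begin{verbatim}
import numpy as np

def best_cut(signal_vals, background_vals, cuts, tighter_is_smaller=True):
    """Scan cut values on a tagger discriminant and return the pair
    (cut, significance) maximising S/sqrt(B). signal_vals and
    background_vals are arrays of discriminant values per event."""
    best = (None, -np.inf)
    for cut in cuts:
        if tighter_is_smaller:       # N-subjettiness: keep tau21 < cut
            s = np.sum(signal_vals < cut)
            b = np.sum(background_vals < cut)
        else:                        # deep boosted: keep score > cut
            s = np.sum(signal_vals > cut)
            b = np.sum(background_vals > cut)
        if b > 0 and s / np.sqrt(b) > best[1]:
            best = (cut, s / np.sqrt(b))
    return best
\end{verbatim}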
\begin{figure} \begin{minipage}{0.5\textwidth} @@ -1033,67 +1113,341 @@ higher significance but as it is very close to the edge where the significance drops very low, and because the stricter the cut, the less background will be left to calculate the cross section limits, especially at higher resonance masses, the slightly less strict cut is chosen. The -significance for the \(\tau_{21}\) cut is 14.0818, and for the deep -boosted tagger 25.6097. For both taggers also a low purity category is -introduced for high TeV regions. Using the cuts optimized for 2 TeV, -there are very few background events left for higher resonance masses, -but to reliably calculate cross section limits, those are needed. As low -purity category for the N-subjettiness tagger, a cut at -\(0.35 < \tau_{21} < 0.75\) is used. For the deep boosted tagger the -opposite cut from the high purity category is used: \(VvsQCD < 0.95\). +significance for the \(\tau_{21}\) cut is 14, and for the deep boosted +tagger 26. -\hypertarget{signal-extraction}{% -\section{Signal extraction}\label{signal-extraction}} +For both taggers, a low purity category is also introduced for the high +TeV regions. Using the cuts optimized for 2 TeV, there are very few +background events left for higher resonance masses, but to reliably +calculate cross section limits, those are needed. As the low purity +category for the N-subjettiness tagger, a cut of +\(0.35 < \tau_{21} < 0.75\) is used. For the deep boosted tagger the +opposite cut from the high purity category is used: \(VvsQCD < 0.95\). + +\hypertarget{sec:extr}{% +\section{Signal extraction}\label{sec:extr}} + +After the optimization, the optimal selection for the N-subjettiness as +well as the deep boosted tagger is found and applied to the simulated +samples as well as the data collected by the CMS. The fit described in +sec.~\ref{sec:moa} is performed for all masspoints of the decay to qW +and qZ and for both datasets used, the one from 2016 and the combined +one of 2016, 2017 and 2018. To extract the signal from the background, its cross section limit is -calculated using a frequentist asymptotic limit calculator. It uses a -fit to the simulated samples to calculate expected limits for all the -available masspoints and then uses a fit to the actual data to determine -an observed limit. If there's no resonance of the q* particle in the -data, the observed limit should lie within the \(2\sigma\) environment -of the expected limit. After that, the crossing of the theory line, -representing the cross section limits expected, if the q* particle would -exist, and the observed data is calculated, to have a limit of mass up -to which the existence of the q* particle can be excluded. To find the -uncertainty of this result, the crossing of the theory line plus, -respectively minus, its uncertainty with the observed limit is also -calculated. +calculated using a frequentist asymptotic limit calculator. It uses the +fit that was performed to the simulated samples to calculate expected +limits for all the available masspoints and then a fit to the actual +data to determine an observed limit. If there is no resonance of the q* +particle in the data, the observed limit should lie within the +\(2\sigma\) band of the expected limit. After that, the crossing of the +theory line, which represents the cross section expected if the q* +particle exists, with the observed limit is calculated, yielding a mass +up to which the existence of the q* particle can be excluded. To find +the uncertainty of this result, the crossing of the theory line shifted +up, respectively down, by its uncertainty with the observed limit is +also calculated.
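As a much simplified stand-in for that machinery, the following toy computes a frequentist 95~\% CL upper limit on the signal yield for a single counting experiment; the actual analysis uses the asymptotic CLs method on the full binned \(m_{jj}\) fit instead, so this is an illustration of the idea only.

\begin{verbatim}
from scipy.stats import poisson

def upper_limit(n_obs, b, cl=0.95, s_max=50.0, step=0.01):
    """Toy frequentist upper limit on the signal yield s for a counting
    experiment with expected background b: the smallest s for which
    observing <= n_obs events has probability below 1 - cl."""
    s = 0.0
    while s < s_max:
        if poisson.cdf(n_obs, b + s) < 1.0 - cl:
            return s
        s += step
    return s_max
\end{verbatim}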
\hypertarget{uncertainties}{% \subsection{Uncertainties}\label{uncertainties}} -The following uncertainties are considered: +For calculating the cross section of the signal, four sources of +uncertainty are considered. -\begin{itemize} -\tightlist -\item - \emph{Luminosity}: the integrated luminosity of the LHC has an - uncertainty of 2.5 \%. -\item - \emph{Jet Energy Corrections}: for the Jet Energy Corrections, an - uncertainty of 2 \% is assumed. -\item - \emph{Tagger Efficiency(?)}: 6 \% (TODO!) -\item - \emph{Parameter Uncertainty of the fit}: The CombinedLimit program - used for determining the cross section varies the parameters used for - the fit and therefore includes their uncertainties to calculate the - final result. -\end{itemize} +First, the uncertainty of the Jet Energy Corrections. When measuring a +particle's energy with the ECAL or HCAL part of the CMS, the electronic +signals sent by the photodetectors in the calorimeters have to be +converted to actual energy values. An error in this calibration +therefore shifts the measured energy to higher or lower values, which +also shifts the position of the signal peak in the \(m_{jj}\) +distribution. The uncertainty is approximated to be 2 \%. + +Second, the tagger is not perfect: some jets that do not originate from +a V boson are wrongly selected, while some jets that do originate from +one are rejected. This influences the events chosen for the analysis and +is therefore also considered as an uncertainty, which is approximated to +be 6 \%. + +Third, the uncertainty of the parameters of the background fit is also +considered, as it might slightly change the background shape and +therefore influence how many signal and background events are +reconstructed from the data. + +Fourth, the uncertainty on the luminosity of the LHC of 2.5 \% is also +taken into account for the final results. \hypertarget{results}{% \section{Results}\label{results}} -In this chapter the results and a comparison to previous research will -be shown as well as a comparisos n between the two different taggers -used. +This chapter starts by presenting the results for the data of the year +2016 using both taggers and comparing them to the previous research +\autocite{PREV_RESEARCH}. It then shows the results for the combined +dataset, again using both taggers and comparing their performance. \hypertarget{section}{% \subsection{2016}\label{section}} Using the data collected by the CMS experiment in 2016, the cross -section limits seen in fig.~\ref{fig:res2016} were obtained. The -extracted cross section limits are: +section limits seen in fig.~\ref{fig:res2016} were obtained. + +As described in sec.~\ref{sec:extr}, the calculated cross section limits +are used to derive a mass limit, i.e.\ the mass up to which the +existence of the q* particle can be excluded, by finding the crossing of +the theory line with the observed cross section limit.
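The crossing itself is a simple interpolation problem; a sketch, interpolating linearly in the logarithm of the cross section since both curves fall roughly exponentially with mass:

\begin{verbatim}
import numpy as np

def mass_limit(masses, observed, theory):
    """Mass where the observed cross section limit crosses the theory
    prediction: exclusion ends where the theory curve drops below the
    observed limit. All arguments are arrays over the masspoints."""
    diff = np.log(np.asarray(observed)) - np.log(np.asarray(theory))
    for i in range(len(masses) - 1):
        if diff[i] < 0 <= diff[i + 1]:      # sign change: crossing here
            f = -diff[i] / (diff[i + 1] - diff[i])
            return masses[i] + f * (masses[i + 1] - masses[i])
    return None                              # no crossing in the range
\end{verbatim}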
In fig.~\ref{fig:res2016} it can be +seen that, for the deep boosted tagger, the observed limit in the region +where theory and observed limit cross is very high compared to the +N-subjettiness tagger. Therefore the two lines cross earlier, which +results in lower exclusion limits on the mass of the q* particle, +causing the deep boosted tagger to perform worse than the N-subjettiness +tagger with regard to establishing those limits, as can be seen in +tbl.~\ref{tbl:res2016}. The table also shows the upper and lower limits +on the mass, found by calculating the crossing of the theory line plus, +respectively minus, its uncertainty. Because both the theory line and +the observed limit are very flat in the high TeV region, even a small +uncertainty of the theory can cause a large difference in the mass +limit. + +\hypertarget{tbl:res2016}{} +\begin{longtable}[]{@{}lllll@{}} +\caption{\label{tbl:res2016}Mass limits found using the data collected +in 2016}\tabularnewline +\toprule +Decay & Tagger & Limit {[}TeV{]} & Upper Limit {[}TeV{]} & Lower Limit +{[}TeV{]}\tabularnewline +\midrule +\endfirsthead +\toprule +Decay & Tagger & Limit {[}TeV{]} & Upper Limit {[}TeV{]} & Lower Limit +{[}TeV{]}\tabularnewline +\midrule +\endhead +qW & \(\tau_{21}\) & 5.39 & 6.01 & 4.99\tabularnewline +qW & deep boosted & 4.96 & 5.19 & 4.84\tabularnewline +qZ & \(\tau_{21}\) & 4.86 & 4.96 & 4.70\tabularnewline +qZ & deep boosted & 4.49 & 4.61 & 4.40\tabularnewline +\bottomrule +\end{longtable} + +\begin{figure} + \begin{minipage}{0.5\textwidth} + \includegraphics{./figures/results/brazilianFlag_QtoqW_2016tau_13TeV.pdf} + \end{minipage} + \begin{minipage}{0.5\textwidth} + \includegraphics{./figures/results/brazilianFlag_QtoqW_2016db_13TeV.pdf} + \end{minipage} + \begin{minipage}{0.5\textwidth} + \includegraphics{./figures/results/brazilianFlag_QtoqZ_2016tau_13TeV.pdf} + \end{minipage} + \begin{minipage}{0.5\textwidth} + \includegraphics{./figures/results/brazilianFlag_QtoqZ_2016db_13TeV.pdf} + \end{minipage} +\caption{Results of the cross section limits for 2016 using the $\tau_{21}$ tagger (left) and the deep boosted tagger +(right).} +\label{fig:res2016} +\end{figure} + +\hypertarget{previous-research}{% +\subsubsection{Previous research}\label{previous-research}} + +The limit established by using the N-subjettiness tagger on the 2016 +data is already slightly higher than the one from the previous research, +which was found to be 5 TeV for the decay to qW and 4.7 TeV for the +decay to qZ. This is mainly due to the fact that, in our data, the +observed limit at the intersection point happens to be in the lower +region of the expected limit interval, causing a very late crossing with +the theory line when using the N-subjettiness tagger (as can be seen in +fig.~\ref{fig:res2016}). This could be caused by small differences in +the setup used or by slightly differently processed data. Comparing the +expected limits, the values calculated in this thesis differ from the +previous research by between 3 \% and 30 \%; neither of the two results +is consistently lower or higher, the differences fluctuate. Therefore it +can be said that the results are in good agreement. The cross section +limits of the previous research can be seen in fig.~\ref{fig:prev}.
+ +\begin{figure} +\begin{minipage}{0.5\textwidth} +\includegraphics{./figures/results/prev_qW.png} +\end{minipage} +\begin{minipage}{0.5\textwidth} +\includegraphics{./figures/results/prev_qZ.png} +\end{minipage} +\caption{Previous results of the cross section limits for q\* decaying to qW (left) and q\* decaying to qZ (right). +Taken from \cite{PREV_RESEARCH}.} +\label{fig:prev} +\end{figure} + +\hypertarget{combined-dataset}{% +\subsection{Combined dataset}\label{combined-dataset}} + +Using the combined data, the cross section limits seen in +fig.~\ref{fig:resCombined} were obtained. Compared to using only the +2016 dataset, the cross section limits are almost cut in half, showing +the substantial improvement achieved by using more than three times the +amount of data. + +The results for the mass limits of the combined years are as follows: + +\begin{longtable}[]{@{}lllll@{}} +\caption{Mass limits found using the data collected in 2016 - +2018}\tabularnewline +\toprule +Decay & Tagger & Limit {[}TeV{]} & Upper Limit {[}TeV{]} & Lower Limit +{[}TeV{]}\tabularnewline +\midrule +\endfirsthead +\toprule +Decay & Tagger & Limit {[}TeV{]} & Upper Limit {[}TeV{]} & Lower Limit +{[}TeV{]}\tabularnewline +\midrule +\endhead +qW & \(\tau_{21}\) & 6.00 & 6.26 & 5.74\tabularnewline +qW & deep boosted & 6.11 & 6.31 & 5.39\tabularnewline +qZ & \(\tau_{21}\) & 5.49 & 5.76 & 5.29\tabularnewline +qZ & deep boosted & 4.92 & 5.02 & 4.80\tabularnewline +\bottomrule +\end{longtable} + +The combination of the three years not only improved the cross section +limits, but also the limit on the mass of the q* particle. The final +result is 1 TeV higher for the decay to qW and almost 0.8 TeV higher for +the decay to qZ than what was concluded by the previous research +\autocite{PREV_RESEARCH}. + +\begin{figure} + \begin{minipage}{0.5\textwidth} + \includegraphics{./figures/results/brazilianFlag_QtoqW_Combinedtau_13TeV.pdf} + \end{minipage} + \begin{minipage}{0.5\textwidth} + \includegraphics{./figures/results/brazilianFlag_QtoqW_Combineddb_13TeV.pdf} + \end{minipage} + \begin{minipage}{0.5\textwidth} + \includegraphics{./figures/results/brazilianFlag_QtoqZ_Combinedtau_13TeV.pdf} + \end{minipage} + \begin{minipage}{0.5\textwidth} + \includegraphics{./figures/results/brazilianFlag_QtoqZ_Combineddb_13TeV.pdf} + \end{minipage} +\caption{Results of the cross section limits for the three combined years using the $\tau_{21}$ tagger (left) and the +deep boosted tagger (right).} +\label{fig:resCombined} +\end{figure} + +\hypertarget{comparison-of-taggers}{% +\subsection{Comparison of taggers}\label{comparison-of-taggers}} + +The results shown above already indicate that the deep boosted tagger +was not able to significantly improve the results compared to the +N-subjettiness tagger. For further comparison, in +fig.~\ref{fig:limit_comp} the expected limits of the different taggers +for the q* \(\rightarrow\) qW and the q* \(\rightarrow\) qZ decay are +shown. It can be seen that the deep boosted tagger is at best as good as +the N-subjettiness tagger. This was not the expected result, as the deep +neural network was already found to provide a higher significance in the +optimisation done in sec.~\ref{sec:opt}. The higher significance should +also result in lower cross section limits. Apparently, doing the +optimization only on data of the year 2018 was not the best choice.
To +make sure there is no mistake in the setup, the expected cross section +limits using only the high purity category of the two taggers with 2018 +data are also compared in fig.~\ref{fig:comp_2018}. There, the cross +section limits calculated using the deep boosted tagger are slightly +lower than with the N-subjettiness tagger, showing that the optimisation +method was working but should have been applied to the combined dataset. + +Recently, some issues with the training of the deep boosted tagger used +in this analysis were also found, which might explain why it did not +perform better in general. + +\begin{figure} +\hypertarget{fig:comp_2018}{% +\centering +\includegraphics{./figures/limit_comp_2018.pdf} +\caption{Comparison of the deep boosted and N-subjettiness taggers in +the high purity category using the data from the year +2018.}\label{fig:comp_2018} +} +\end{figure} + +\begin{figure} +\begin{minipage}{0.5\textwidth} +\includegraphics{./figures/limit_comp_w.pdf} +\end{minipage} +\begin{minipage}{0.5\textwidth} +\includegraphics{./figures/limit_comp_z.pdf} +\end{minipage} +\caption{Comparison of expected limits of the different taggers using different datasets. Left: decay to qW. Right: +decay to qZ.} +\label{fig:limit_comp} +\end{figure} + +\clearpage +\newpage + +\hypertarget{summary}{% +\section{Summary}\label{summary}} + +In this thesis, a limit on the mass of the q* particle has been +successfully established. By combining the data from the years 2016, +2017 and 2018, collected by the CMS experiment, the previously set limit +could be significantly improved. + +For the data analysis, the following selection was applied: + +\begin{itemize} +\tightlist +\item + \#jets \(\ge 2\) +\item + \(\Delta\eta \le 1.3\) +\item + \(m_{jj} \ge \SI{1050}{\giga\eV}\) +\item + \(\SI{35}{\giga\eV} < m_{SDM} < \SI{105}{\giga\eV}\) +\end{itemize} + +For the deep boosted tagger, a high purity category of \(VvsQCD > 0.95\) +and a low purity category of \(VvsQCD \le 0.95\) was used. For the +N-subjettiness tagger the high purity category was \(\tau_{21} < 0.35\) +and the low purity category \(0.35 < \tau_{21} < 0.75\). These values +were found by optimizing for the highest possible significance of the +signal. + +After the selection, the cross section limits were extracted from the +data and new exclusion limits for the mass of the q* particle were set. +These are 6.1 TeV for the decay to qW and 5.5 TeV for the decay to qZ. +Those limits are about 1 TeV higher than the ones found in the previous +research, which arrived at 5 TeV and 4.7 TeV, respectively. + +Two different taggers were used to compare their performance. The newer +deep boosted tagger was found to not improve the result over the older +N-subjettiness tagger. This was rather unexpected but might be caused by +some training issues that were identified recently. + +This research can also be used to test other theories of the q* particle +that predict its existence at lower masses than the one used here, by +overlaying the different theory curves in the plots shown in +fig.~\ref{fig:res2016} and fig.~\ref{fig:resCombined}. + +The optimization process used to find the best cut values for the +discriminants provided by the taggers was found to be suboptimal. It was +only done using 2018 data, with which the deep boosted tagger showed a +higher significance than the N-subjettiness tagger. Apparently, the +assumption that the same optimization would apply to the data of the +other years as well did not hold.
Using the combined dataset, the deep +boosted tagger showed no better cross section limits than the +N-subjettiness tagger; the expected limits are directly related to the +significance used for the optimization. Therefore, with a better +optimization and the training issues of the deep boosted tagger fixed, +it is very likely that the result presented here could be further +improved. + +\newpage + +\nocite{*} + +\printbibliography + +\newpage +\hypertarget{appendix}{% +\section*{Appendix}\label{appendix}} \begin{longtable}[]{@{}lllll@{}} \caption{Cross Section limits using 2016 data and the N-subjettiness @@ -1203,82 +1557,6 @@ limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline \bottomrule \end{longtable} -Using the deep boosted tagger, the observed limit in the region where -theory and observed limit cross is very high compared to when using the -N-subjettiness tagger. This causes the tagger to perform worse than the -older tagger as the crossing of the two lines therefore happens earlier. - -\begin{longtable}[]{@{}lllll@{}} -\caption{Mass limits found using the data collected in -2016}\tabularnewline -\toprule -Decay & Tagger & Limit {[}TeV{]} & Upper Limit {[}TeV{]} & Lower Limit -{[}TeV{]}\tabularnewline -\midrule -\endfirsthead -\toprule -Decay & Tagger & Limit {[}TeV{]} & Upper Limit {[}TeV{]} & Lower Limit -{[}TeV{]}\tabularnewline -\midrule -\endhead -qW & \(\tau_{21}\) & 5.39 & 6.01 & 4.99\tabularnewline -qW & deep boosted & 4.96 & 5.19 & 4.84\tabularnewline -qZ & \(\tau_{21}\) & 4.86 & 4.96 & 4.70\tabularnewline -qZ & deep boosted & 4.49 & 4.61 & 4.40\tabularnewline -\bottomrule -\end{longtable} - -\begin{figure} - \begin{minipage}{0.5\textwidth} - \includegraphics{./figures/results/brazilianFlag_QtoqW_2016tau_13TeV.pdf} - \end{minipage} - \begin{minipage}{0.5\textwidth} - \includegraphics{./figures/results/brazilianFlag_QtoqW_2016db_13TeV.pdf} - \end{minipage} - \begin{minipage}{0.5\textwidth} - \includegraphics{./figures/results/brazilianFlag_QtoqZ_2016tau_13TeV.pdf} - \end{minipage} - \begin{minipage}{0.5\textwidth} - \includegraphics{./figures/results/brazilianFlag_QtoqZ_2016db_13TeV.pdf} - \end{minipage} -\caption{Results of the cross section limits for 2016 using the $\tau_{21}$ tagger (left) and the deep boosted tagger -(right).} -\label{fig:res2016} -\end{figure} - -\hypertarget{previous-research}{% -\subsubsection{Previous research}\label{previous-research}} - -The limit is already slightly higher than the one from previous -research, which was found to be 5 TeV for the decay to qW and 4.7 TeV -for the decay to qZ. This is mainly due to the fact, that in our data, -the observed limit at the intersection point happens to be in the lower -region of the expected limit interval and therefore causing a very late -crossing with the theory line when using the N-subjettiness tagger. This -could be caused by small differences of the setup used or slightly -differently processed data. In general, the results appear to be very -similar to the previous research, seen in fig.~\ref{fig:prev}. - -\begin{figure} -\begin{minipage}{0.5\textwidth} -\includegraphics{./figures/results/prev_qW.png} -\end{minipage} -\begin{minipage}{0.5\textwidth} -\includegraphics{./figures/results/prev_qZ.png} -\end{minipage} -\caption{Previous results of the cross section limits for q\* decaying to qW (left) and q\* decaying to qZ (right).
-Taken from \cite{PREV_RESEARCH}.} -\label{fig:prev} -\end{figure} - -\hypertarget{section-1}{% -\subsection{2016 + 2017 + 2018}\label{section-1}} - -Using the combined data, the cross section limits seen in -fig.~\ref{fig:resCombined} were obtained. It is quite obvious, that the -limits are already significantly lower than when only using the data of -2016. The extracted cross section limits are the following: - \begin{longtable}[]{@{}lllll@{}} \caption{Cross Section limits using the combined data and the N-subjettiness tagger for the decay to qW}\tabularnewline @@ -1387,108 +1665,4 @@ limit {[}pb{]} & Obs. limit {[}pb{]}\tabularnewline \bottomrule \end{longtable} -The results for the mass limits of the combined years are as follows: - -\begin{longtable}[]{@{}lllll@{}} -\caption{Mass limits found using the data collected in 2016 - -2018}\tabularnewline -\toprule -Decay & Tagger & Limit {[}TeV{]} & Upper Limit {[}TeV{]} & Lower Limit -{[}TeV{]}\tabularnewline -\midrule -\endfirsthead -\toprule -Decay & Tagger & Limit {[}TeV{]} & Upper Limit {[}TeV{]} & Lower Limit -{[}TeV{]}\tabularnewline -\midrule -\endhead -qW & \(\tau_{21}\) & 6.00 & 6.26 & 5.74\tabularnewline -qW & deep boosted & 6.11 & 6.31 & 5.39\tabularnewline -qZ & \(\tau_{21}\) & 5.49 & 5.76 & 5.29\tabularnewline -qZ & deep boosted & 4.92 & 5.02 & 4.80\tabularnewline -\bottomrule -\end{longtable} - -\begin{figure} - \begin{minipage}{0.5\textwidth} - \includegraphics{./figures/results/brazilianFlag_QtoqW_Combinedtau_13TeV.pdf} - \end{minipage} - \begin{minipage}{0.5\textwidth} - \includegraphics{./figures/results/brazilianFlag_QtoqW_Combineddb_13TeV.pdf} - \end{minipage} - \begin{minipage}{0.5\textwidth} - \includegraphics{./figures/results/brazilianFlag_QtoqZ_Combinedtau_13TeV.pdf} - \end{minipage} - \begin{minipage}{0.5\textwidth} - \includegraphics{./figures/results/brazilianFlag_QtoqZ_Combineddb_13TeV.pdf} - \end{minipage} -\caption{Results of the cross section limits for the three combined years using the $\tau_{21}$ tagger (left) and the -deep boosted tagger (right).} -\label{fig:resCombined} -\end{figure} - -The combination of the three years has a big impact on the result. The -final limit is 1 TeV higher than what could previously be concluded. - -\hypertarget{comparison-of-taggers}{% -\subsection{Comparison of taggers}\label{comparison-of-taggers}} - -The previously shown results already show, that the deep boosted tagger -was not able to significantly improve the results compared to the -N-subjettiness tagger. For further comparison, in -fig.~\ref{fig:limit_comp} the expected limits of the different taggers -for the q* \(\rightarrow\) qW and the q* \(\rightarrow\) qZ decay are -shown. It can be seen, that the deep boosted is at best as good as the -N-subjettiness tagger. This was not the expected result, as the deep -neural network was supposed to provide better separation between signal -and background events than the older N-subjettiness tagger. Recently, -some issues with the training of the deep boosted tagger used in this -analysis were found, so those might explain the bad performance. - -\begin{figure} -\begin{minipage}{0.5\textwidth} -\includegraphics{./figures/limit_comp_w.pdf} -\end{minipage} -\begin{minipage}{0.5\textwidth} -\includegraphics{./figures/limit_comp_z.pdf} -\end{minipage} -\caption{Comparison of expected limits of the different taggers using different datasets. Left: decay to qW. 
Right: -decay to qZ} -\label{fig:limit_comp} -\end{figure} - -\newpage - -\hypertarget{summary}{% -\section{Summary}\label{summary}} - -In this thesis, a limit on the mass of the q* particle has been -successfully established. By combining the data from the years 2016, -2017 and 2018, collected by the CMS experiment, the previously set limit -could be significantly improved. For that, a combined fit to the QCD -background and signal had to be performed and the cross section limits -extracted. Also, the new deep boosted tagger, using a deep neural -network, was compared to the older N-subjettiness tagger and found to -not significantly change the result, neither to the better nor to the -worse. Due to some training issues identified lately, there is still a -good chance, that, with that issue fixed, it will be able to further -improve the results. Also previously research of the 2016 data was -repeated and the results compared. The previous research arrived at a -exclusion limit up to 5 TeV resp. 4.7 TeV for the decay to qW resp. qZ, -this thesis at 5.4 TeV resp. 4.9 TeV. The difference can be explained by -small differences in the data used and the setup itself. After that, -using the combined data, the limit could be significantly improved to -exclude the q* particle up to a mass of 6.2 TeV resp. 5.5 TeV. With the -research presented in this thesis, it would also be possible to test -other theories of the q* particle that predict its existence at lower -masses, than the one used, by overlaying the different theory curves in -the plots shown in fig.~\ref{fig:res2016} and -fig.~\ref{fig:resCombined}. - -\newpage - -\nocite{*} - -\printbibliography - \end{document} diff --git a/thesis.toc b/thesis.toc index 6f60cfc..5ea8812 100644 --- a/thesis.toc +++ b/thesis.toc @@ -1,38 +1,38 @@ \boolfalse {citerequest}\boolfalse {citetracker}\boolfalse {pagetracker}\boolfalse {backtracker}\relax \babel@toc {british}{} \contentsline {section}{\numberline {1}Introduction}{1}{section.1}% -\contentsline {section}{\numberline {2}Theoretical background}{2}{section.2}% +\contentsline {section}{\numberline {2}Theoretical motivation}{2}{section.2}% \contentsline {subsection}{\numberline {2.1}Standard model}{2}{subsection.2.1}% -\contentsline {subsubsection}{\numberline {2.1.1}Quantum Chromodynamic background}{3}{subsubsection.2.1.1}% -\contentsline {subsubsection}{\numberline {2.1.2}Shortcomings of the Standard Model}{3}{subsubsection.2.1.2}% +\contentsline {subsubsection}{\numberline {2.1.1}Shortcomings of the Standard Model}{4}{subsubsection.2.1.1}% \contentsline {subsection}{\numberline {2.2}Excited quark states}{4}{subsection.2.2}% -\contentsline {section}{\numberline {3}Experimental Setup}{6}{section.3}% -\contentsline {subsection}{\numberline {3.1}Large Hadron Collider}{6}{subsection.3.1}% -\contentsline {subsection}{\numberline {3.2}Compact Muon Solenoid}{6}{subsection.3.2}% -\contentsline {subsubsection}{\numberline {3.2.1}Coordinate conventions}{7}{subsubsection.3.2.1}% -\contentsline {subsubsection}{\numberline {3.2.2}The tracking system}{7}{subsubsection.3.2.2}% -\contentsline {subsubsection}{\numberline {3.2.3}The electromagnetic calorimeter}{7}{subsubsection.3.2.3}% -\contentsline {subsubsection}{\numberline {3.2.4}The hadronic calorimeter}{8}{subsubsection.3.2.4}% -\contentsline {subsubsection}{\numberline {3.2.5}The solenoid}{8}{subsubsection.3.2.5}% -\contentsline {subsubsection}{\numberline {3.2.6}The muon system}{8}{subsubsection.3.2.6}% -\contentsline {subsubsection}{\numberline {3.2.7}The 
Trigger system}{8}{subsubsection.3.2.7}% -\contentsline {subsubsection}{\numberline {3.2.8}The Particle Flow algorithm}{8}{subsubsection.3.2.8}% -\contentsline {subsection}{\numberline {3.3}Jet clustering}{9}{subsection.3.3}% -\contentsline {section}{\numberline {4}Method of analysis}{11}{section.4}% +\contentsline {subsubsection}{\numberline {2.2.1}Quantum Chromodynamic background}{5}{subsubsection.2.2.1}% +\contentsline {section}{\numberline {3}Experimental Setup}{7}{section.3}% +\contentsline {subsection}{\numberline {3.1}Large Hadron Collider}{7}{subsection.3.1}% +\contentsline {subsection}{\numberline {3.2}Compact Muon Solenoid}{7}{subsection.3.2}% +\contentsline {subsubsection}{\numberline {3.2.1}Coordinate conventions}{8}{subsubsection.3.2.1}% +\contentsline {subsubsection}{\numberline {3.2.2}The tracking system}{8}{subsubsection.3.2.2}% +\contentsline {subsubsection}{\numberline {3.2.3}The electromagnetic calorimeter}{9}{subsubsection.3.2.3}% +\contentsline {subsubsection}{\numberline {3.2.4}The hadronic calorimeter}{9}{subsubsection.3.2.4}% +\contentsline {subsubsection}{\numberline {3.2.5}The solenoid}{9}{subsubsection.3.2.5}% +\contentsline {subsubsection}{\numberline {3.2.6}The muon system}{9}{subsubsection.3.2.6}% +\contentsline {subsubsection}{\numberline {3.2.7}The Trigger system}{9}{subsubsection.3.2.7}% +\contentsline {subsubsection}{\numberline {3.2.8}The Particle Flow algorithm}{9}{subsubsection.3.2.8}% +\contentsline {subsection}{\numberline {3.3}Jet clustering}{10}{subsection.3.3}% +\contentsline {section}{\numberline {4}Method of analysis}{12}{section.4}% \contentsline {subsection}{\numberline {4.1}Signal and Background modelling}{12}{subsection.4.1}% -\contentsline {section}{\numberline {5}Preselection and data quality}{13}{section.5}% -\contentsline {subsection}{\numberline {5.1}Preselection}{13}{subsection.5.1}% -\contentsline {subsection}{\numberline {5.2}Data - Monte Carlo Comparison}{17}{subsection.5.2}% -\contentsline {subsubsection}{\numberline {5.2.1}Sideband}{18}{subsubsection.5.2.1}% +\contentsline {section}{\numberline {5}Preselection and data quality}{14}{section.5}% +\contentsline {subsection}{\numberline {5.1}Preselection}{14}{subsection.5.1}% +\contentsline {subsection}{\numberline {5.2}Data - Monte Carlo Comparison}{18}{subsection.5.2}% +\contentsline {subsubsection}{\numberline {5.2.1}Sideband}{19}{subsubsection.5.2.1}% \contentsline {section}{\numberline {6}Jet substructure selection}{20}{section.6}% \contentsline {subsection}{\numberline {6.1}N-Subjettiness}{20}{subsection.6.1}% -\contentsline {subsection}{\numberline {6.2}DeepAK8}{20}{subsection.6.2}% +\contentsline {subsection}{\numberline {6.2}DeepAK8}{21}{subsection.6.2}% \contentsline {subsection}{\numberline {6.3}Optimization}{21}{subsection.6.3}% \contentsline {section}{\numberline {7}Signal extraction}{22}{section.7}% -\contentsline {subsection}{\numberline {7.1}Uncertainties}{22}{subsection.7.1}% -\contentsline {section}{\numberline {8}Results}{22}{section.8}% +\contentsline {subsection}{\numberline {7.1}Uncertainties}{23}{subsection.7.1}% +\contentsline {section}{\numberline {8}Results}{23}{section.8}% \contentsline {subsection}{\numberline {8.1}2016}{23}{subsection.8.1}% -\contentsline {subsubsection}{\numberline {8.1.1}Previous research}{26}{subsubsection.8.1.1}% -\contentsline {subsection}{\numberline {8.2}2016 + 2017 + 2018}{26}{subsection.8.2}% -\contentsline {subsection}{\numberline {8.3}Comparison of taggers}{28}{subsection.8.3}% -\contentsline {section}{\numberline 
{9}Summary}{30}{section.9}% +\contentsline {subsubsection}{\numberline {8.1.1}Previous research}{24}{subsubsection.8.1.1}% +\contentsline {subsection}{\numberline {8.2}Combined dataset}{25}{subsection.8.2}% +\contentsline {subsection}{\numberline {8.3}Comparison of taggers}{25}{subsection.8.3}% +\contentsline {section}{\numberline {9}Summary}{29}{section.9}%