Learning PDFs through Interpretable Latent Representations in Mellin Space
Authors:
Brandon Kriesten, T. J. Hobbs
Keywords:
High Energy Physics - Phenomenology (hep-ph), High Energy Physics - Lattice (hep-lat), Nuclear Theory (nucl-th)
Report number:
ANL-186490
Date:
2023-12-04
Abstract
Representing the parton distribution functions (PDFs) of the proton and other hadrons through flexible, high-fidelity parametrizations has been a long-standing goal of particle physics phenomenology. This is particularly true since the chosen parametrization methodology can play an influential role in the ultimate PDF uncertainties extracted in QCD global analyses; these, in turn, are often determinative of the reach of experiments at the LHC and other facilities to non-standard physics, including at large $x$, where parametrization effects can be significant. In this study, we explore a series of encoder-decoder machine-learning (ML) models with various neural-network topologies as efficient means of reconstructing PDFs from meaningful information stored in an interpretable latent space. Given recent efforts to pioneer synergies between QCD analyses and lattice-gauge calculations, we formulate a latent representation based on the behavior of PDFs in Mellin space, i.e., their integrated moments, and test the ability of various models to decode PDFs faithfully from this information. We introduce a numerical package, $\texttt{PDFdecoder}$, which implements several encoder-decoder models to reconstruct PDFs with high fidelity, and we use this tool to explore strengths and pitfalls of neural-network approaches to PDF parametrization. We additionally dissect patterns of learned correlations between encoded Mellin moments and reconstructed PDFs, which suggest opportunities for further improvements to ML-based approaches to PDF parametrization and uncertainty quantification.
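To make the latent representation concrete: the Mellin moment of order $n$ of a PDF $f(x)$ is $M(n) = \int_0^1 x^{n-1} f(x)\,dx$. The sketch below (not part of $\texttt{PDFdecoder}$; the toy parametrization $f(x) = x^a (1-x)^b$ and all parameter values are illustrative assumptions) computes these moments analytically via the Beta function and cross-checks them numerically.

```python
import math

def beta(p, q):
    """Euler Beta function B(p, q) = Gamma(p) Gamma(q) / Gamma(p + q)."""
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

def mellin_moment(n, a, b):
    """Analytic Mellin moment of the toy PDF f(x) = x^a (1-x)^b:
    M(n) = int_0^1 x^(n-1) x^a (1-x)^b dx = B(n + a, b + 1)."""
    return beta(n + a, b + 1)

def mellin_numeric(n, a, b, num_points=100_000):
    """Midpoint-rule cross-check of the same integral."""
    h = 1.0 / num_points
    total = 0.0
    for i in range(num_points):
        x = (i + 0.5) * h  # midpoint of each subinterval
        total += x ** (n - 1 + a) * (1.0 - x) ** b * h
    return total

if __name__ == "__main__":
    # Illustrative large- and small-x exponents, not a fitted PDF.
    a, b = 0.5, 3.0
    for n in (1, 2, 3):
        print(n, mellin_moment(n, a, b), mellin_numeric(n, a, b))
```

In the setting described in the abstract, a small vector of such low-order moments (the kind accessible to lattice-QCD calculations) would serve as the interpretable latent vector from which a decoder network reconstructs the full $x$-dependence of $f(x)$.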