The universe is worth 64³ pixels: convolution neural network and vision transformers for cosmology

Se Yeon Hwang, Cristiano G. Sabiu, Inkyu Park, Sungwook E. Hong

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

We present a novel approach for estimating cosmological parameters, Ωm, σ8, w0, and one derived parameter, S8, from 3D lightcone data of dark matter halos in redshift space covering a sky area of 40° × 40° and a redshift range of 0.3 < z < 0.8, binned to 64³ voxels. Using two deep learning algorithms, a Convolutional Neural Network (CNN) and a Vision Transformer (ViT), we compare their performance with the standard two-point correlation function (2pcf). Our results indicate that the CNN yields the best performance, while the ViT also demonstrates significant potential in predicting cosmological parameters. By combining the outcomes of the ViT, the CNN, and the 2pcf, we achieve a substantial reduction in error compared to the 2pcf alone. To better understand the inner workings of the machine learning algorithms, we employ the Grad-CAM method to investigate the sources of essential information in heatmaps of the CNN and ViT. Our findings suggest that the algorithms focus on different parts of the density field and different redshifts depending on which parameter they are predicting. This proof-of-concept work paves the way for incorporating deep learning methods to estimate cosmological parameters from large-scale structure, potentially leading to tighter constraints and an improved understanding of the Universe.
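
As a rough illustration of the approach described in the abstract, the sketch below shows a minimal 3D CNN regressor that maps a 64³ halo-density grid to the three target parameters and then forms the derived parameter S8 = σ8·√(Ωm/0.3). The class name HaloCNN, the helper derived_S8, the layer widths, and the use of PyTorch are assumptions made here for illustration; they are not the architecture published in the paper.

```python
# Illustrative sketch only: a small 3D CNN that regresses (Omega_m, sigma_8, w_0)
# from a 64^3 density grid, in the spirit of the paper's CNN branch. Layer sizes,
# channel counts, and names are assumptions, not the authors' published model.
import torch
import torch.nn as nn

class HaloCNN(nn.Module):
    def __init__(self, n_params: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 64^3 -> 32^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 32^3 -> 16^3
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # global average pooling
        )
        self.head = nn.Linear(64, n_params)       # -> (Omega_m, sigma_8, w_0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.head(h)

def derived_S8(omega_m: torch.Tensor, sigma_8: torch.Tensor) -> torch.Tensor:
    # Standard definition of the derived parameter: S_8 = sigma_8 * sqrt(Omega_m / 0.3)
    return sigma_8 * torch.sqrt(omega_m / 0.3)

if __name__ == "__main__":
    model = HaloCNN()
    grid = torch.rand(2, 1, 64, 64, 64)           # batch of mock 64^3 halo-density voxels
    params = model(grid)                          # shape (2, 3): Omega_m, sigma_8, w_0
    # Untrained model, so values are arbitrary; this only checks tensor shapes.
    s8 = derived_S8(params[:, 0], params[:, 1])
    print(params.shape, s8.shape)
```

The global average pooling keeps the parameter count small for volumetric input; Grad-CAM heatmaps of the kind discussed in the abstract would typically be computed from the activations and gradients of the last convolutional layer of such a network.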

Original language: English
Article number: 075
Journal: Journal of Cosmology and Astroparticle Physics
Volume: 2023
Issue number: 11
State: Published - 1 Nov 2023

Keywords

  • Machine learning
  • cosmological parameters from LSS
  • dark energy experiments
  • galaxy clustering
