Commit cca9f01 (1 parent: 5867977)
Showing 1 changed file with 34 additions and 2 deletions.
@@ -1,2 +1,34 @@
-search_query=cat:astro-ph.*+AND+lastUpdatedDate:[202406112000+TO+202406172000]&start=0&max_results=5000
-<h1>New astro-ph.* submissions cross listed on cs.AI, physics.data-an, stat.*, cs.LG starting 202406112000 and ending 202406172000</h1>Feed last updated: 2024-06-17T00:00:00-04:00
+search_query=cat:astro-ph.*+AND+lastUpdatedDate:[202406122000+TO+202406182000]&start=0&max_results=5000
+<h1>New astro-ph.* submissions cross listed on physics.data-an, cs.LG, cs.AI, stat.* starting 202406122000 and ending 202406182000</h1>Feed last updated: 2024-06-18T00:00:00-04:00<a href="http://arxiv.org/pdf/2406.10771v1"><h2>Predicting Exoplanetary Features with a Residual Model for Uniform and
+Gaussian Distributions</h2></a>Authors: Andrew Sweet</br>Comments: 19 pages, 7 figures, Conference proceedings for ECML PKDD 2023</br>Primary Category: astro-ph.EP</br>All Categories: astro-ph.EP, astro-ph.IM, cs.LG, physics.data-an</br><p>The advancement of technology has led to rampant growth in data collection
+across almost every field, including astrophysics, with researchers turning to
+machine learning to process and analyze this data. One prominent example of
+this data in astrophysics is the atmospheric retrievals of exoplanets. In order
+to help bridge the gap between machine learning and astrophysics domain
+experts, the 2023 Ariel Data Challenge was hosted to predict posterior
+distributions of 7 exoplanetary features. The procedure outlined in this paper
+leveraged a combination of two deep learning models to address this challenge:
+a Multivariate Gaussian model that generates the mean and covariance matrix of
+a multivariate Gaussian distribution, and a Uniform Quantile model that
+predicts quantiles for use as the upper and lower bounds of a uniform
+distribution. Training of the Multivariate Gaussian model was found to be
+unstable, while training of the Uniform Quantile model was stable. An ensemble
+of uniform distributions was found to have competitive results during testing
+(posterior score of 696.43), and when combined with a multivariate Gaussian
+distribution achieved a final rank of third in the 2023 Ariel Data Challenge
+(final score of 681.57).</p></br><a href="http://arxiv.org/pdf/2406.10372v1"><h2>Insights into Dark Matter Direct Detection Experiments: Decision Trees
+versus Deep Learning</h2></a>Authors: Daniel E. Lopez-Fogliani, Andres D. Perez, Roberto Ruiz de Austri</br>Comments: 26 pages, 7 figures, 2 tables</br>Primary Category: astro-ph.IM</br>All Categories: astro-ph.IM, astro-ph.HE, hep-ex, hep-ph, physics.data-an</br><p>The detection of Dark Matter (DM) remains a significant challenge in particle
+physics. This study exploits advanced machine learning models to improve
+detection capabilities of liquid xenon time projection chamber experiments,
+utilizing state-of-the-art transformers alongside traditional methods like
+Multilayer Perceptrons and Convolutional Neural Networks. We evaluate various
+data representations and find that simplified feature representations,
+particularly corrected S1 and S2 signals, retain critical information for
+classification. Our results show that while transformers offer promising
+performance, simpler models like XGBoost can achieve comparable results with
+optimal data representations. We also derive exclusion limits in the
+cross-section versus DM mass parameter space, showing minimal differences
+between XGBoost and the best performing deep learning models. The comparative
+analysis of different machine learning approaches provides a valuable reference
+for future experiments by guiding the choice of models and data representations
+to maximize detection capabilities.</p></br>
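
The file changed in this commit is a plain-text digest of arXiv API responses: each `search_query=` line records the exact query sent to the API, and the HTML that follows is the rendered feed for that window. As a minimal sketch (not code from this repository), the snippet below shows how such a query could be issued against the public arXiv export API; the endpoint and the search_query/start/max_results parameters are documented arXiv API features, while the date window simply mirrors the query string in the diff.

```python
# A minimal sketch, not code from this repository: issuing the same kind of
# query against the public arXiv export API. The endpoint and the parameters
# search_query / start / max_results are documented arXiv API features; the
# date window below mirrors the query string recorded in the diff.
import urllib.request

BASE_URL = "http://export.arxiv.org/api/query"

params = {
    # astro-ph.* entries last updated inside a one-week window,
    # expressed as lastUpdatedDate:[YYYYMMDDHHMM TO YYYYMMDDHHMM].
    "search_query": "cat:astro-ph.*+AND+lastUpdatedDate:[202406122000+TO+202406182000]",
    "start": 0,
    "max_results": 5000,
}

# The search_query value already uses '+' as its separator, so the query
# string is assembled as-is rather than URL-encoded a second time.
query_string = "&".join(f"{key}={value}" for key, value in params.items())

with urllib.request.urlopen(f"{BASE_URL}?{query_string}") as response:
    feed_xml = response.read().decode("utf-8")  # Atom XML feed

print(feed_xml[:500])  # entries can then be parsed, e.g. with feedparser
```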