(ORDO NEWS) — Almost a century has passed since astronomer Fritz Zwicky first calculated the mass of the Coma Cluster, a dense galaxy cluster containing almost 1,000 galaxies located in the nearby universe.
However, estimating the mass of such an object is difficult: the cluster is enormous in size and density and lies some 320 million light-years away, and many of the difficulties Zwicky faced remain unresolved today.
The first measurements made by Zwicky, as well as many calculations carried out since, suffer from sources of error that bias the resulting estimate up or down.
In a new study, a team of physicists led by Matthew Ho of Carnegie Mellon University in the United States developed a deep-learning method that estimates the mass of the Coma Cluster more accurately and effectively suppresses these sources of error.
To calculate the mass of the Coma Cluster, Zwicky and subsequent scientific teams used dynamical mass measurements: they studied the motion of objects belonging to, or orbiting, the cluster, then applied the modern understanding of gravity to infer the cluster's mass.
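The classic dynamical approach can be illustrated with the virial theorem, which relates a cluster's mass to the velocity dispersion of its galaxies and its size, M ≈ 5σ²R/G. The sketch below is illustrative only: the input numbers are typical literature values for a Coma-like cluster, not figures from the article or the paper.

```python
# A minimal sketch of a virial (dynamical) mass estimate, M ~ 5 sigma^2 R / G,
# the kind of calculation Zwicky applied to the Coma Cluster.
# The input values below are illustrative assumptions, not the paper's data.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
MPC = 3.086e22         # one megaparsec in metres

def virial_mass(sigma_v, radius):
    """Order-of-magnitude cluster mass from the line-of-sight velocity
    dispersion sigma_v (m/s) and cluster radius (m), via the virial theorem."""
    return 5.0 * sigma_v**2 * radius / G

sigma = 1.0e6          # ~1000 km/s, a typical dispersion for a rich cluster
radius = 2.0 * MPC     # ~2 Mpc, a typical cluster radius
mass = virial_mass(sigma, radius)
print(f"virial mass ~ {mass / M_SUN:.1e} solar masses")  # ~1e15, Coma-like scale
```

With these inputs the estimate lands around 10¹⁵ solar masses, the accepted order of magnitude for Coma; the point of the sketch is how strongly the answer depends on the measured velocity dispersion, which is exactly where the errors discussed below enter.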
However, this approach carries unavoidable errors. Galaxy clusters exist as nodes in a giant "web" of material distributed throughout the universe, and they constantly collide and merge with one another, which distorts the velocity profiles of the galaxies that make up the cluster.
And because astronomers observe the cluster from a great distance, many unrelated objects lie along the line of sight between the cluster and Earth; these can be mistaken for cluster members and bias the mass estimate up or down.
Recent research has shown great progress in quantifying and accounting for the impact of such inaccuracies, but machine learning-based techniques offer an innovative data-driven approach, according to Ho.
“Our deep learning method finds out which of the real data are useful measurements and which are not,” said Ho, adding that their method eliminates errors from overlapping galaxies (selection effects) and takes into account the different shapes of galaxies (physical effects). “The use of these data-driven methods allows us to improve the quality and automation of our forecasts.”
Ho’s method adapts a well-known deep-learning architecture, the convolutional neural network (CNN), widely used in pattern recognition. The researchers trained their model on data from cosmological simulations of the universe.
The model was trained on synthetic "observations" of thousands of simulated galaxy clusters whose masses were known in advance.
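The training setup described above, simulated clusters with known masses serving as labels for a supervised model, can be sketched in miniature. The toy below is not the authors' CNN: it replaces the network with the simplest possible supervised regressor (a least-squares fit of log-mass against log velocity dispersion), and the mock clusters and their scaling relation are assumptions made purely for illustration.

```python
import numpy as np

# Toy analogue of the paper's pipeline: generate simulated clusters with
# known masses, "observe" their member-galaxy velocities, fit a supervised
# model on the simulations, then apply it to a held-out cluster.
# This is NOT the authors' architecture, just the simplest supervised analogue.
rng = np.random.default_rng(0)

def mock_cluster(log_mass):
    """Draw 200 member-galaxy line-of-sight velocities (km/s) for a cluster
    of given log10 mass; dispersion scaling with M^(1/3) is an assumption."""
    sigma = 1000.0 * 10 ** ((log_mass - 15.0) / 3.0)
    return rng.normal(0.0, sigma, size=200)

# "Simulation suite": 500 clusters with known (true) log-masses.
log_masses = rng.uniform(14.0, 15.5, size=500)
log_sigmas = np.array([np.log10(mock_cluster(m).std()) for m in log_masses])

# Supervised training step: learn log-mass from the observable dispersion.
slope, intercept = np.polyfit(log_sigmas, log_masses, 1)

# "Apply to observations": a held-out mock cluster with a known answer.
test_velocities = mock_cluster(15.0)
prediction = slope * np.log10(test_velocities.std()) + intercept
print(f"predicted log10 mass: {prediction:.2f} (true value 15.0)")
```

The recovered prediction lands close to the true value of 15.0, which mirrors the validation logic in the article: a model trained only on simulations is checked on synthetic clusters with known masses before being trusted on real data.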
After analyzing in depth how the model performed on these synthetic data, Ho and his colleagues applied it to real observations, obtaining estimates of the Coma Cluster's mass close to those researchers have produced since the 1980s.
According to the authors, this agreement confirms the validity of the model.