Cosmologists use massive computer simulations to verify that models of the formation of the universe match observational data from telescopes. The data produced by these simulations create a number of opportunities for visualization research:

1) The Epoch of Reionization began when the first stars appeared after the Big Bang and radiation from those stars started ionizing the surrounding hydrogen. Visualizing this process is similar to volume rendering, but different enough that current volume rendering APIs cannot be used directly. How do we extend graphics APIs to solve the reionization problem?

2) The data generated by these simulations are so large that only a few timesteps can be saved. In post-hoc visualization, a significant amount of data wrangling is required to convert data from the simulation format into something that visualization tools understand, often requiring several iterations before the results are useful to the science team. Creating customizable workflows for automatic visualization is an interesting research and application problem. In addition, how do we build pipelines for in situ visualization given the memory and resource restrictions of simulation codes?

3) Validation is important in the field of cosmology, and some cosmologists compare Lagrangian cosmology simulations to Eulerian ones. How do we compare visualizations produced from these two different topologies? Can we assume that a point-cloud volume rendering is equivalent to a grid-based volume rendering?
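For question 3, one common prerequisite to any such comparison is resampling the Lagrangian (particle) data onto the same uniform grid used by the Eulerian code before rendering. The sketch below shows Cloud-In-Cell (CIC) deposition, a standard particle-to-grid scheme in cosmology; the function name, parameters, and the assumption of a periodic box are illustrative and not taken from any particular simulation code.

```python
import numpy as np

def deposit_cic(positions, masses, grid_size, box_length):
    """Deposit particle masses onto a uniform periodic grid with
    Cloud-In-Cell (trilinear) weighting, producing a density field
    that can be volume rendered like an Eulerian snapshot."""
    density = np.zeros((grid_size,) * 3)
    cell = box_length / grid_size
    # Fractional grid coordinates (cell centers at half-integer positions).
    u = positions / cell - 0.5
    i0 = np.floor(u).astype(int)   # index of the lower corner cell
    f = u - i0                     # fractional offset in [0, 1)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # Trilinear weight for this corner of the particle's cloud.
                w = (np.where(dx, f[:, 0], 1.0 - f[:, 0])
                     * np.where(dy, f[:, 1], 1.0 - f[:, 1])
                     * np.where(dz, f[:, 2], 1.0 - f[:, 2]))
                idx = (i0 + [dx, dy, dz]) % grid_size  # periodic wrap
                # Unbuffered scatter-add: particles sharing a cell accumulate.
                np.add.at(density, (idx[:, 0], idx[:, 1], idx[:, 2]),
                          w * masses)
    return density / cell**3  # convert mass per cell to density
```

CIC conserves total mass exactly, which gives one quantitative check before comparing the two renderings; whether the resulting images are perceptually equivalent remains the open research question.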