Autonomous vehicles (AVs) have the potential to save millions of lives and increase the efficiency of transportation services. However, the successful deployment of AVs requires tackling multiple challenges related to modeling and certifying safety. State-of-the-art decision-making methods typically rely on end-to-end approaches, in which raw high-dimensional inputs are fed to learning models with the expectation that the agent learns directly from the data, or on imitation learning approaches that train AVs on human driving examples. Despite extensive efforts to date by AV developers, these methods still exhibit failure rates that pose significant safety risks, motivating risk-aware AVs that can better predict and handle dangerous situations. Furthermore, current approaches tend to lack explainability due to their reliance on end-to-end deep learning, where significant causal relationships are not guaranteed to be learned from the data. This paper introduces a novel risk-aware framework for training AV agents using a bespoke collision prediction model and Reinforcement Learning (RL). The collision prediction model, based on Gaussian Processes and vehicle dynamics, is used to generate the RL state vector. The use of an explicit risk model increases the post-hoc explainability of the AV agent, which is vital for reaching and certifying the high safety levels required for AVs and other safety-sensitive applications. Experimental results using a simulator and state-of-the-art RL algorithms show that risk-awareness can decrease collision rates and make AVs more robust to sudden harsh braking situations. The proposed collision prediction model also outperforms other collision models in the literature. Moreover, the risk-aware RL-based framework achieves better performance in both safety and speed than a standard rule-based baseline, the Intelligent Driver Model (IDM).
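To make the core idea concrete, the following is a minimal, self-contained sketch of how a Gaussian Process regressor could map kinematic features to a collision-risk score that augments the RL state vector. It is not the paper's actual model: the features (gap and closing speed), the training data, and all class and function names (`GPRiskModel`, `rbf_kernel`, `solve`) are illustrative assumptions, and the exact GP implementation uses a plain squared-exponential kernel with a hand-rolled linear solver to avoid external dependencies.

```python
import math

def rbf_kernel(x, y, length_scale=1.0):
    # Squared-exponential kernel over feature vectors (illustrative choice).
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2.0 * length_scale ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

class GPRiskModel:
    """GP regression from kinematic features to a collision-risk score."""
    def __init__(self, X, y, noise=1e-6):
        self.X = X
        K = [[rbf_kernel(a, b) for b in X] for a in X]
        for i in range(len(X)):
            K[i][i] += noise  # jitter for numerical stability
        self.alpha = solve(K, y)  # alpha = K^{-1} y

    def predict(self, x):
        # GP posterior mean: k(x, X) @ alpha.
        k = [rbf_kernel(x, xi) for xi in self.X]
        return sum(ki * ai for ki, ai in zip(k, self.alpha))

# Hypothetical training data: (gap [m], closing speed [m/s]) -> observed risk in [0, 1].
X = [(30.0, 0.0), (10.0, 5.0), (5.0, 8.0)]
y = [0.0, 0.6, 0.95]
gp = GPRiskModel(X, y)

# Augment the RL observation with the predicted risk score.
ego_obs = [12.0, 3.0]  # e.g., gap and closing speed to the lead vehicle
risk = gp.predict(tuple(ego_obs))
state_vector = ego_obs + [risk]
```

Appending the risk score to the observation in this way is one simple realization of "the collision prediction model generates the RL state vector": the policy then conditions on an explicit, inspectable risk estimate rather than only on raw kinematics, which is what supports the post-hoc explainability argument.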