by Alina Y. Efremenko
One of my most memorable moments as a teenager was my family’s trip to the Grand Canyon. It was my first experience riding a horse. I begged my grandfather to come with me, and he reluctantly agreed. When I asked him why he disliked the idea so much, he said, “No one asked the horses.” Fast forward many years to today.
In the past, regulatory agencies such as the FDA and the EPA relied on animal testing for risk assessment. Historically, this approach was understandable: the tools available for understanding a chemical’s properties and its effects on the body were limited. As computational capabilities have grown, however, these practices have begun to change.
In 2009, the European Union banned the testing of cosmetic ingredients on animals. Many industries and regulatory agencies are now moving in the same direction; the US EPA, for example, plans to reduce its requests for, and funding of, mammalian studies by 30% by 2025 and to eliminate them by 2035. This push is not only a step in the right direction ethically; given the sheer number of chemicals to evaluate and the cost and time of animal studies, it is also the more practical one. And, as before, no one is asking the animals whether they want to participate in toxicology studies.
A valuable tool for dealing with the number of chemicals to be tested was the development of the Threshold of Toxicological Concern (TTC), which classifies chemicals based on their structure to determine the health risks associated with exposure. The TTC organizes chemicals into three classes: low, intermediate, or high concern. Each class is assigned a safe exposure level, a TTC value, based on the previously observed toxicity of structurally similar compounds. The idea behind this risk-based approach is that if actual or likely human exposure falls below the TTC value, then the chemical is not considered a potential risk to humans. We describe the process of classifying chemicals here.
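To make the screening step concrete, here is a minimal Python sketch of the TTC decision rule, assuming the widely cited Munro et al. (1996) thresholds of 1800, 540, and 90 µg/person/day for Cramer Classes I, II, and III. The class assignment itself (for example, via a structure-based tool such as Toxtree) is assumed to happen upstream, and the function name here is illustrative rather than part of any specific regulatory workflow.

```python
# Illustrative TTC screening step: compare an estimated daily human
# exposure against the threshold for the chemical's Cramer class.
# Thresholds are the Munro et al. (1996) values in ug/person/day;
# assigning the class itself (e.g., with Toxtree) is assumed done upstream.

TTC_UG_PER_DAY = {
    "I": 1800.0,   # low structural concern
    "II": 540.0,   # intermediate concern
    "III": 90.0,   # high concern
}

def below_ttc(cramer_class: str, exposure_ug_per_day: float) -> bool:
    """True if the estimated exposure falls below the class TTC value,
    i.e., the chemical is presumed to pose negligible risk and can be
    deprioritized for further (animal) testing."""
    return exposure_ug_per_day < TTC_UG_PER_DAY[cramer_class]

# Example: a Cramer Class III ingredient with an estimated exposure of
# 30 ug/person/day screens as below the 90 ug/day threshold.
print(below_ttc("III", 30.0))  # True -> negligible-risk presumption
```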
For the past five years, ScitoVation has collaborated with Cosmetics Europe, the American Chemistry Council, the Research Institute for Fragrance Materials (RIFM), and others on deriving internal Thresholds of Toxicological Concern (iTTCs); a perspective article by Ellison et al. (2021) describes this effort. The iTTC project is the next step in using the TTC for risk assessment: we use clearance data coupled with PBPK modeling to estimate the internal plasma concentration associated with the oral-toxicity no-observed-adverse-effect level (NOAEL) for chemicals in the TTC database. This extension of the TTC approach to iTTC values can help in evaluating different routes of exposure and metabolism. The iTTC project includes a broad distribution of chemicals to ensure that it results in a robust risk assessment tool applicable to diverse chemical properties.
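The iTTC workflow itself relies on full PBPK models, but the core idea of an internal dose metric can be illustrated with a much simpler, hypothetical one-compartment calculation: the average steady-state plasma concentration implied by a repeated oral dose at the NOAEL, given a clearance value. The parameter values below are invented for illustration and are not drawn from the project’s database.

```python
# Back-of-the-envelope illustration of an "internal" dose metric: the
# average steady-state plasma concentration implied by a repeated oral
# NOAEL dose. The iTTC work uses full PBPK models; this one-compartment
# steady-state relationship (Css = F * dose_rate / CL) only illustrates
# how clearance links an external dose to an internal concentration.
# All parameter values below are hypothetical.

def css_mg_per_l(noael_mg_per_kg_day: float,
                 body_weight_kg: float,
                 oral_bioavailability: float,
                 clearance_l_per_h: float) -> float:
    """Average steady-state plasma concentration (mg/L) for a daily oral dose."""
    dose_rate_mg_per_h = noael_mg_per_kg_day * body_weight_kg / 24.0
    return oral_bioavailability * dose_rate_mg_per_h / clearance_l_per_h

# Hypothetical rat example: 10 mg/kg/day NOAEL, 0.25 kg body weight,
# 80% oral bioavailability, clearance of 0.05 L/h.
print(f"{css_mg_per_l(10.0, 0.25, 0.8, 0.05):.3f} mg/L")  # 1.667 mg/L
```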
Model development and testing depend heavily on existing data and/or new data collection: the more information we can use to inform the model, the more we can refine it. When developing a model intended for a large set of chemicals, therefore, the more data, the better. Initially, a literature search covering over 1,200 chemicals stemming from three TTC databases (Munro, COSMOS, and RIFM) was completed. This review established which chemicals have existing in vivo and in vitro absorption, distribution, metabolism, and excretion (ADME) and pharmacokinetic (PK) data. Using these data, an initial rat oral PBPK model was developed; it uses in vitro metabolic clearance data to estimate in vivo plasma concentrations. For substances in this set of 1,200 chemicals that lack readily available clearance data, work is ongoing to collect new in vitro metabolic clearance measurements and use them to parameterize PBPK models in the Population Life-course Exposure to Health Effects Model (PLETHEM) platform. If we use current knowledge and models to better understand the risk associated with chemical exposure, then in the future chemical plasma concentrations can be predicted from chemical properties and likely routes of exposure.
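As a rough sketch of how in vitro clearance data can feed a PBPK model, the snippet below applies the standard two-step in vitro-to-in vivo extrapolation (IVIVE): scale a hepatocyte intrinsic clearance to the whole liver, then convert it to hepatic clearance with the well-stirred liver model. This is not PLETHEM’s code, and the rat parameter values are hypothetical, chosen only to make the example runnable.

```python
# Sketch of the standard IVIVE step used to turn measured hepatocyte
# clearance into a whole-body parameter for a PBPK model (PLETHEM
# implements workflows of this kind; the code here is an independent
# illustration with hypothetical rat values).

def scale_clint(clint_ul_min_per_million_cells: float,
                hepatocellularity_million_per_g: float,
                liver_weight_g: float) -> float:
    """Scale in vitro intrinsic clearance to whole-liver CLint in L/h."""
    ul_per_min = (clint_ul_min_per_million_cells
                  * hepatocellularity_million_per_g * liver_weight_g)
    return ul_per_min * 60.0 / 1e6  # uL/min -> L/h

def hepatic_clearance(clint_l_per_h: float,
                      liver_blood_flow_l_per_h: float,
                      fraction_unbound: float) -> float:
    """Well-stirred liver model: CLh = Q * fu * CLint / (Q + fu * CLint)."""
    fu_clint = fraction_unbound * clint_l_per_h
    return (liver_blood_flow_l_per_h * fu_clint
            / (liver_blood_flow_l_per_h + fu_clint))

# Hypothetical rat parameters: 20 uL/min per million cells in vitro,
# 110 million cells/g liver, 10 g liver, 0.8 L/h liver blood flow, fu = 0.2.
clint = scale_clint(20.0, 110.0, 10.0)
print(f"whole-liver CLint = {clint:.2f} L/h")                 # 1.32 L/h
print(f"hepatic CL = {hepatic_clearance(clint, 0.8, 0.2):.2f} L/h")  # 0.20 L/h
```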
Technology is advancing and improving rapidly, giving the regulatory science community greater computational power: power that can be applied to move away from reliance on traditional, costly, and time-consuming testing in laboratory animals toward more efficient and scientifically robust inference tools for establishing safe exposure levels, such as the TTC and, eventually, the iTTC. We hope that by demonstrating these computational capabilities and their use in risk assessment, we can help inspire confidence in this type of assessment and elevate the use of in vitro and in silico methods for safety evaluations and risk assessments. For further information, please email me.
Featured Image: CCF, now Cruelty Free International (@crueltyfreeintl). (2017, November 21). Alternatives to Animal Testing [Image] [Tweet]. Twitter. https://twitter.com/CrueltyFreeAus/status/933107489490444288?s=20