If you're not sure how AI models actually work, or whether they're safe to use, you're not alone.
In an effort to pull back the curtain on black-box tools like ChatGPT, Stanford University unveiled a new transparency rubric, dubbed the Foundation Model Transparency Index (FMTI). Created in collaboration with MIT and Princeton, the FMTI aims to incentivize AI companies to be more upfront about their systems.
"While the societal impact of foundation models is growing, transparency is on the decline, mirroring the opacity that has plagued past digital technologies like social media," says Sayash Kapoor, who co-authored the study. Less transparency makes it harder "for consumers to understand model limitations or seek redress for harms caused," adds a university blog post.
The FMTI ranks 10 top AI models on 100 transparency-related dimensions. Those include, for example, how a model was built, the data it was trained on, and the computational resources required. The index also considers policies about the use of the model, data protection, and risk mitigation.
For a full list of the metrics and the methodology, check out the study's accompanying 110-page paper.
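To make the scoring idea concrete, here is a minimal sketch of how an index like this can be aggregated: each model is checked against a set of yes/no transparency indicators, and its score is the share it satisfies. The indicator names below are illustrative placeholders, not the study's actual criteria, and the real FMTI uses 100 indicators with a far more detailed protocol.

```python
# Minimal sketch: score a model as the percentage of binary transparency
# indicators it satisfies. Indicator names are illustrative only.

INDICATORS = [
    "training_data_disclosed",
    "compute_disclosed",
    "model_architecture_disclosed",
    "usage_policy_published",
    "data_protection_policy_published",
    "risk_mitigation_documented",
]

def transparency_score(model_report: dict[str, bool]) -> float:
    """Return the share of indicators a model satisfies, as a percentage."""
    satisfied = sum(1 for name in INDICATORS if model_report.get(name, False))
    return 100 * satisfied / len(INDICATORS)

# Example: a model that discloses its compute and publishes a usage policy,
# but satisfies nothing else, scores 2 of 6 indicators, roughly 33%.
example = {"compute_disclosed": True, "usage_policy_published": True}
print(f"{transparency_score(example):.0f}%")  # -> 33%
```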
The mean score across all models was just 37 out of 100, or 37%. The study's authors were unimpressed, saying none of the scores are "worth crowing about" and that no model comes close to providing adequate transparency.
Meta's Llama 2 model claimed the top spot with a score of 54 out of 100. "We shouldn't think of Meta as the goalpost with everyone trying to get to where Meta is," says Rishi Bommasani, a PhD student who led the effort at the university's Center for Research on Foundation Models. "We should think of everyone trying to get to 80, 90, or possibly 100."