In the rapidly evolving artificial intelligence landscape, the emergence of foundation models such as GPT-4 and Llama 2 has transformed numerous fields, influencing decision-making and shaping user experiences globally. Yet despite their widespread use and impact, there are growing concerns about the lack of transparency surrounding these models. The problem is not unique to AI: it echoes the opacity of earlier digital technologies, such as social media platforms, where consumers have struggled with deceptive practices and misinformation.
The Foundation Model Transparency Index: a new assessment tool
To address this issue, Stanford's Center for Research on Foundation Models, together with collaborators at MIT and Princeton, developed the Foundation Model Transparency Index (FMTI), a tool for rigorously assessing the transparency of foundation model developers. The FMTI comprises approximately 100 binary indicators spanning three broad domains: upstream (the ingredients and processes involved in building a model, such as data, labor, and compute), model (the model's properties and functionality), and downstream (its distribution and use). This comprehensive design allows for a nuanced picture of transparency across the AI ecosystem.
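To make the scoring concrete, here is a minimal sketch of how an FMTI-style score might be computed from binary indicators grouped by domain. The `Indicator` class and the indicator names are invented for illustration and are not the authors' actual methodology or tooling; the sketch only assumes the simple rule "score = share of indicators satisfied, out of 100".

```python
from dataclasses import dataclass

# Hypothetical FMTI-style scoring sketch. Each indicator is a binary
# check (satisfied or not) assigned to one of the three FMTI domains.
# The Indicator class and the example indicator names are invented for
# illustration; they are not the real FMTI indicator list.

DOMAINS = ("upstream", "model", "downstream")

@dataclass
class Indicator:
    name: str
    domain: str      # one of DOMAINS
    satisfied: bool  # did the developer disclose this information?

def overall_score(indicators):
    """Share of indicators satisfied, scaled to a 0-100 score."""
    return 100 * sum(i.satisfied for i in indicators) / len(indicators)

def domain_scores(indicators):
    """Per-domain scores, e.g. to surface upstream opacity."""
    return {
        d: overall_score([i for i in indicators if i.domain == d])
        for d in DOMAINS
    }

# Toy example for one developer, using three invented indicators.
example = [
    Indicator("data sources disclosed", "upstream", False),
    Indicator("model architecture described", "model", True),
    Indicator("usage policy published", "downstream", True),
]
print(round(overall_score(example), 1))  # 66.7 on this toy set
print(domain_scores(example))  # {'upstream': 0.0, 'model': 100.0, 'downstream': 100.0}
```

Grouping indicators by domain, rather than reporting a single aggregate, is what lets an index like this surface patterns such as the upstream opacity discussed below.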
Key findings and implications
Applying the FMTI to 10 major foundation model developers yields sobering results. The highest score was just 54 out of 100, and the mean was only 37, indicating a pervasive lack of transparency across the industry. Open model developers, who release downloadable model weights, lead on transparency, while closed-model developers lag behind, especially on upstream matters such as data, labor, and compute. These findings matter to consumers, businesses, policymakers, and academics, all of whom rely on understanding the capabilities and limitations of these models to make informed decisions.
Towards a transparent AI ecosystem
The FMTI's insights are critical to guiding effective regulation and policy in AI. Policymakers and regulators need transparent information to address issues such as intellectual property, labor practices, energy use, and bias. For consumers, understanding the foundation models behind the products they use is essential for recognizing those products' limitations and seeking redress for harms they cause. By bringing these facts to the surface, the FMTI sets the stage for needed change in the AI industry and paves the way for more responsible behavior by foundation model companies.
Conclusion: Continuous improvement is needed
As a pioneering initiative, the FMTI highlights the urgent need for greater transparency in how foundation models are developed and deployed. As AI technology continues to advance and integrate into various industries, it is essential that the AI research community work with policymakers to increase transparency. Such efforts will not only strengthen trust and accountability in AI systems but also help ensure that those systems align with human values and societal needs.