ChatGPT has become incredibly popular over the past few years. I spoke with Ocean Nexus Principal Investigator Dr. Richard Anderson about using Large Language Models, or LLMs, for marine policy, and about why we need to prioritize capacity building through equitable usage and development.
What is ChatGPT?
Ocean Nexus Innovation Fellow Matt Ziegler works with Richard Anderson to understand what implications AI tools could have on marine policy, especially related to the broader question of equity. Their soon-to-be published work examines the biodiversity beyond national jurisdiction (BBNJ) treaty under the United Nations Convention on the Law of the Sea (UNCLOS), looking at equity and concerns surrounding AI biases. Ziegler created a Question-Answering Bot to explore a set of documents about the BBNJ treaty.
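Ziegler's bot itself is not yet published, but document question-answering systems of this kind typically work by first retrieving the passages most relevant to a question and then asking an LLM to answer from them. The retrieval step can be illustrated with a minimal, hypothetical keyword-overlap sketch; the passages below are invented placeholders, not actual BBNJ treaty text:

```python
import re

def tokenize(text):
    """Lowercase a string and split it into a set of words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, passages, top_k=2):
    """Rank passages by how many words they share with the question."""
    q_words = tokenize(question)
    scored = sorted(passages, key=lambda p: len(q_words & tokenize(p)), reverse=True)
    return scored[:top_k]

# Invented placeholder passages standing in for treaty documents.
passages = [
    "The treaty addresses marine genetic resources and benefit sharing.",
    "Area-based management tools include marine protected areas.",
    "Capacity building supports developing states in ocean governance.",
]
best = retrieve("How does the treaty handle benefit sharing of genetic resources?", passages)
```

In a real system the top-ranked passages would then be passed to an LLM as context, so its answer stays grounded in the documents rather than in whatever its training data happens to contain.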
AI large language models, or LLMs, like ChatGPT have become extremely popular over the last few years. Organizations are starting to use them for text processing and for writing various documents. Students use these tools to write papers, which has become a new concern given how fluently ChatGPT can summarize and compose content.
According to Richard Anderson, professor at the University of Washington Department of Computer Science and Engineering, systems like ChatGPT have succeeded beyond almost anyone’s expectations, and they will have a significant influence on day-to-day tasks. For example, these tools have been used in question-answering systems (i.e., chatbots) to answer health-related questions or customer service inquiries.
Essentially, LLMs work by feeding a large number of documents through a system that compresses the information and generates a representation of language based on the structures and context of the documents provided. These large language systems have been incredibly successful in translating and summarizing information.
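As a toy illustration of "generating language from the structure and context of documents," here is a bigram model, a drastically simplified stand-in for the neural networks real LLMs use, that predicts the next word purely from counts observed in its "training documents" (the three sentences here are invented examples):

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which across the training sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    options = model.get(word.lower())
    return options.most_common(1)[0][0] if options else None

# Invented mini-corpus; real models train on vastly more text.
corpus = [
    "the treaty protects marine biodiversity",
    "the treaty supports developing states",
    "the treaty protects ocean ecosystems",
]
model = train(corpus)
```

The point of the toy is that the model reproduces patterns from its training text without understanding them, which is also why the makeup of the training documents matters so much for the bias concerns discussed below.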
Concerns and risks
The issues of representation and biases arise because most of the documents originate in more developed countries. Ziegler and Anderson questioned whether the AI tools that examined the data would fairly represent the interests of a broader set of people.
The risks of using LLMs are poorly understood, even though policymakers are already using these systems for many tasks, including drafting statements, submissions, and presentations, and conducting background research. Questions also arise about whether LLMs actually understand the problems they are asked, or whether they are simply regurgitating information based on sentence structure and patterns in their training data. Ziegler and Anderson suggest the latter: these tools can produce impressive results, but they ingest a massive number of documents and process them without genuinely understanding them. Technical problems are a further concern. Out-of-date information can be processed alongside current information; for a document published in multiple drafts, the system may not know that a newer version exists.
A big concern with respect to equity is that the information produced by and through LLMs like ChatGPT may not represent the interests of smaller nations. Biases (e.g., racial and gender biases) can result from design processes and training data, and are present in the underlying AI language models. LLMs can also overrepresent the views of developed countries, thereby marginalizing the voices of developing states. This is likely because developed countries produce more of the online content that, in turn, is used to train these language models.
Opportunities: Overcoming biases and mitigating concerns
While there are clear concerns about using AI language models in marine policy, it is important to also think about potential opportunities. “Whether we like it or not, these tools are going to become important in the next five years,” says Anderson. “So the marine policy community needs to understand how to use them effectively and how to mitigate any of the problems.”
AI language models have the potential to help policymakers better understand policy and legal documents. One key to overcoming emerging biases will be moving beyond naïve use of these tools: users need a technical understanding of how the models work in order to address their shortcomings.
Another opportunity is to build capacity in developing countries to better understand and utilize AI tools for their own interests. There is an emerging body of techniques, such as “prompt engineering” and “model customization,” which are important for giving LLMs appropriate context to address various tasks. Ziegler and Anderson’s work reflects the need for these efforts to be global. This will require educational institutions around the globe to create local training programs on the use of LLMs to ensure that the development of AI tools extends beyond the established technology hubs.
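"Prompt engineering" largely means giving an LLM explicit context and instructions rather than a bare question. A minimal, hypothetical sketch of a prompt template for a policy assistant follows; the role, instructions, and context snippet are invented for illustration, not drawn from Ziegler and Anderson's work:

```python
def build_prompt(question, context_passages):
    """Assemble an LLM prompt that grounds the answer in supplied passages."""
    context = "\n".join(f"- {p}" for p in context_passages)
    return (
        "You are assisting a delegate from a developing coastal state.\n"
        "Answer using only the context below, and say so if it is insufficient.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Invented placeholder passage standing in for retrieved treaty text.
prompt = build_prompt(
    "What does the treaty say about capacity building?",
    ["Part V of the BBNJ treaty addresses capacity building and technology transfer."],
)
```

Templates like this are one way communities can steer a general-purpose model toward their own interests and documents, which is part of why local training in these techniques matters for the capacity building the authors call for.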
Using large language tools for policy involves many complexities. Still, LLMs like ChatGPT will almost inevitably become part of policymaking processes. It is now essential that policymakers understand how to use these tools equitably, in a way that builds capacity and uplifts the voices of those who have so far been marginalized.
This blog was edited by Leah Huff.