ChatGPT Draws Lawmakers’ Attention to Artificial Intelligence

Federal agencies are constantly seeking innovative, secure solutions to improve their operations and better serve constituents. One technology that has caught their attention over the past decade is artificial intelligence (AI). With the recent surge of interest, the federal workforce has become captivated by AI's potential to empower employees to carry out agency missions effectively.

The large language model (LLM), a form of AI that powers the now wildly popular ChatGPT tool, has already proven useful to lawmakers as an aid in writing speeches. Government agencies have begun to investigate the many other benefits of implementing the technology within existing processes, including helping employees communicate, streamlining workflows, and increasing employees’ access to information.

Amid the excitement, there are concerns that ChatGPT and other LLM tools could eliminate jobs, provide inaccurate information, and perpetuate bias. While some fears may be unfounded, it’s crucial that federal agencies consider all the potential impacts a full-scale implementation of ChatGPT may have on the workforce and agency stakeholders.

Government agencies are responsible for protecting the safety and well-being of all citizens, including the federal workforce. Some fear that using ChatGPT to assist the workforce will lead to real humans losing their jobs or, as mentioned above, to the technology perpetuating biases.

In April, the Biden administration said it was seeking public comments on potential accountability measures for artificial intelligence systems as questions loom about their impact on national security and education. ChatGPT has attracted U.S. lawmakers’ attention as it has become the fastest-growing consumer application in history, with more than 100 million monthly active users.

The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, wants input as there is "growing regulatory interest" in an AI "accountability mechanism."

The agency wants to know if there are measures that could be put in place to provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."

President Joe Biden has said it remains to be seen whether AI is dangerous, but that "tech companies have a responsibility, in my view, to make sure their products are safe before making them public."

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said NTIA Administrator Alan Davidson.

Doing so requires a thorough understanding of where ChatGPT can serve the federal workforce and, more importantly, where it can’t. Traditionally, AI assists government employees with internal processes by staffing service desks, streamlining decisions, automating repetitive tasks, and more. While ChatGPT can fulfill those traditional roles, its large language model also enables a new dimension of capabilities.

The technology can provide the federal workforce with enhanced training and professional development opportunities by creating online courses, tutorials, and other educational resources that federal employees can access anytime. Alternatively, it could improve employee access to information about policies, procedures, regulations, and work-relevant data and statistics.

ChatGPT also has the potential to assist in solving well-documented government-wide challenges. For example, it can help streamline the complex federal acquisition process by drafting a government contract that employees can edit instead of creating from scratch. The potential applications are not lost on agencies, as the Department of Defense is already in the process of creating a similar AI-powered contract-writing solution known as “AcqBot” to accelerate workflows.

ChatGPT’s answers, however, are not always reliable. While the original data it pulls from may be accurate, condensing and adjusting that data to answer a unique prompt can produce the most statistically likely statement rather than an accurate one, at times resulting in entirely made-up sources. In the event of an error, the tool won’t acknowledge its mistake but will simply restate its response. This can mislead federal employees relying on the tool for information on benefits, or introduce formatting errors into contracts drafted with it.

On top of that, if an employee unknowingly uses that incorrect information to make a decision that results in a negative outcome, they may suffer a loss of time, resources, and even reputation. It is therefore essential that ChatGPT be properly tested and monitored, and that employees be trained to minimize the risk of errors, before the system is put to use.
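As one illustrative safeguard (a minimal sketch, not an agency-endorsed control), a review workflow could ask the model to cite its sources and then flag any answer whose cited URLs don’t resolve, routing it to a human reviewer. The function name here is hypothetical, and a URL that loads is of course no guarantee the source actually supports the claim:

```python
import re
import urllib.request

def find_unreachable_citations(answer: str, timeout: int = 5) -> list[str]:
    """Return every URL cited in a model answer that fails to respond."""
    urls = re.findall(r"https?://[^\s)\"']+", answer)
    unreachable = []
    for url in urls:
        try:
            urllib.request.urlopen(url, timeout=timeout)
        except Exception:
            unreachable.append(url)  # dead link: possibly a fabricated source
    return unreachable

# Example: screen a drafted answer before it reaches an employee.
answer = "Per https://www.opm.gov/policy-data-oversight/ ..."  # model output
if find_unreachable_citations(answer):
    print("Unverifiable citations found; route this answer to human review.")
```

A check like this catches only the most obvious fabrications; human review of the substance remains the backstop.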

Prompts play a crucial role in guiding generative AI technologies like ChatGPT to produce useful, relevant responses. This GovTech resource may prove useful for those looking to leverage the technology in the workplace: https://www.govtech.com/artificial-intelligence/chatgpt-example-prompts-for-state-and-local-government
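As a rough illustration of that structure (a hypothetical sketch assuming the official OpenAI Python client; the model name and the benefits-office task are invented for the example), an effective prompt typically spells out the role, the task, the constraints, and the desired output:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Role, task, constraints, and output format, stated explicitly.
prompt = (
    "You are an assistant for a state benefits office. "
    "Summarize the eligibility policy below in plain language for public "
    "posting. Keep it under 200 words, write at an eighth-grade reading "
    "level, and do not state any eligibility criteria that are not in the "
    "source text.\n\n"
    "Policy text: <paste policy excerpt here>"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The explicit constraints (length, reading level, no invented criteria) are what steer the model away from the hallucination risks discussed above.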


About the Author: Taylor Genter

Taylor is a Marketing Manager at Extract with experience in data analytics, graphic design, and both digital and social media marketing. She earned her Bachelor of Business Administration degree in Marketing at the University of Wisconsin-Whitewater. Taylor enjoys analyzing people’s behaviors and attitudes to find out what motivates them, and then curating better ways to communicate with them.