AI technology like ChatGPT could threaten U.S. national security by aiding efforts such as international terrorism, says Craig Martell, the U.S. Department of Defense's chief digital and AI officer.
Martell spoke on May 3, 2023, at the Armed Forces Communications and Electronics Association (AFCEA)'s TechNet Cyber conference in Baltimore.
He told attendees that ChatGPT's ability to generate fluent but fabricated content, such as academic essays, is a huge problem.
“My fear is that we trust it too much without the providers of [a service] building into it the right safeguards and the ability for us to validate the information,” Martell said. “I’m scared to death. That’s my opinion.”
Martell described ChatGPT as a “perfect tool for disinformation” because it has “been trained to express itself in a fluent manner. It speaks fluently and authoritatively.” He noted that because of this, people will believe content created by the chatbot “even when it’s wrong.”
The official said there is an urgent need for tools to counter the AI threat: “We really need tools to be able to detect when that’s happening and to be able to warn when that’s happening. And we don’t have those tools.”
One of the first political bloggers in the world, Oliver Willis has operated OliverWillis.com since 2000. He is a contributor at Media Matters for America and The American Independent. Follow him on Twitter at @owillis.