US Fed CIO says it’s ‘hard to justify’ hiring human coders over AI

Leaders at the U.S. Federal Reserve indicated that AI could replace future hiring of human programmers at the agency as it explores new use cases.

Leaders at the United States Federal Reserve appear convinced that generative artificial intelligence (AI) tools will function as a “super analyst” for banks and the government, one capable of handling customer service duties for banks and replacing human programmers.

Sunayna Tuteja, the Federal Reserve’s chief innovation officer, recently spoke at the Chicago AI Week event in a fireside chat with Margaret Riley, SVP at the payments unit of the Fed’s financial services division.

The topic of discussion was “Advancing responsible AI Innovation at the Federal Reserve System.” According to a report from financial news and analysis outlet Risk.net, Tuteja and Riley discussed five use cases for generative AI being explored by the Fed: data cleansing, customer engagement, content generation, translating legacy code and enhancing operational efficiency.

An AI “super analyst”

Riley described the overall potential of generative AI as that of a “super analyst” that could make life easier for workers at the Fed and serve as a customer support specialist, personalizing and enhancing banks’ ability to interface with clients.

On the subject of “translating legacy code,” Tuteja appeared to lean into the idea that large language models (LLMs), such as ChatGPT or similar AI products, could replace some jobs traditionally reserved for humans:

“It’s hard to justify [hiring] coding developers to update all old code to new code, but now you can leverage LLMs and then the developer becomes the auditor or the editor versus the primary doer.”
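The Fed has not described its tooling, but as a rough illustration of the “developer as the auditor” workflow Tuteja sketches, a script might ask an LLM to draft a translation of a legacy routine and then hold the result for human review rather than deploying it. The sketch below assumes the OpenAI Python client and a generic chat model; the COBOL snippet, prompt, model name and file names are illustrative placeholders, not anything the Fed has disclosed.

```python
# Illustrative sketch only: asks an LLM to draft a translation of a legacy
# routine, then leaves the draft for a human developer to audit and edit.
# Assumes the OpenAI Python client (openai>=1.0); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LEGACY_COBOL = """
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADD-INTEREST.
       PROCEDURE DIVISION.
           COMPUTE NEW-BALANCE = OLD-BALANCE * (1 + RATE).
"""

def draft_translation(legacy_source: str, target_language: str = "Python") -> str:
    """Ask the model for a draft translation; a human reviews it before merge."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You translate legacy code. Return only the translated code."},
            {"role": "user",
             "content": f"Translate this COBOL to {target_language}:\n{legacy_source}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_translation(LEGACY_COBOL)
    # The developer acts as auditor/editor: the draft goes to a review file
    # (or a pull request) instead of straight into production.
    with open("translation_draft_for_review.py", "w") as f:
        f.write(draft)
    print("Draft written; awaiting human review.")
```

In a setup like this, the model produces the first pass and the developer signs off on it, which is the division of labor the quote describes.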

Dangers and drawbacks

The two were careful to stress that generative AI and LLMs have their limitations and that the use cases discussed were exploratory at the present time.

While the risks of inserting generative AI systems into technology sectors where accuracy is important — such as finance — are well-documented, Tuteja issued a stark warning on the possible drawbacks of not implementing them:

“We should think about all the risks of doing something new, but we should also ask ourselves: what is the risk of not doing something? Because sometimes the risk of inaction is greater than the risk of action, but the way to go forward has to be responsible.”
