A university in the United States has received $20 million in federal funding to create a new AI institute.
According to a report from a local news publication, Carnegie Mellon University in Pittsburgh, Pennsylvania received the funding for its new AI Institute for Societal Decision Making.
The institute will develop AI tools to assist decision-making in societal situations such as natural disasters and public health emergencies. Aarti Singh, a professor in the university’s machine learning department, will serve as the institute’s director.
She said one of the primary goals will be to create AI that is “human-centric.”
“We need to develop AI technology that works for the people… It's actually built on data that is vetted, algorithms that are vetted, with feedback from all the stakeholders and participatory design.”
Singh said she believes AI can play a vital role in helping officials make more informed decisions across a range of scenarios.
Researchers at the institute will consult public health officials, emergency managers and community workers, along with behavioral and cognitive scientists, while developing and training the new technology.
Related: BlockGPT launches ‘chat to earn’ ecosystem for training AI
Additionally, Singh pointed out that the ethical use of AI is a “central goal” at the institute, and developers and researchers must be “careful” in the process.
“I think one of the key things is making sure that we are engaging with AI in an ethical way so that it is deployed when it's needed.”
This comes as governments around the world begin to examine the use of AI in policymaking and the regulations needed to keep the technology in check. In Romania, the government recently unveiled an AI chatbot that will crowdsource public input to help inform policy decisions.
Leaders in other countries, such as the United States and China, are also contemplating new regulations for the technology. In the European Union, lawmakers are finalizing a new AI Act that includes guidelines for generative AI tools.
Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?