Argentina is set to create a specialized task force that will use artificial intelligence to identify and deter future crimes within the country.
The Ministry of Security of Argentina has announced the creation of the Applied Artificial Intelligence for Security Unit (UIAAS), led by the director of cybercrime and cyber affairs, along with members from the Argentine Federal Police and security forces.
One of the group’s main tasks will be to “use machine learning algorithms to analyze historical crime data to predict future crimes and help prevent them,” according to a recent statement from the ministry.
Identifying unusual patterns in computer networks
The UIAAS’s remit covers a wide range of crimes. A major focus is identifying potential cyber threats by detecting “unusual patterns in computer networks,” including malware, phishing and other types of cyberattacks.
The unit will also take on more dangerous tasks, such as bomb disposal, and aims to speed up communication between the police and relevant security teams.
Monitoring social media activity was also cited as a way to detect early signs of communication about potential future crimes.
“Analyze social media activities to detect potential threats, identify criminal group movements, or foresee disturbances,” the statement read.
Not everyone is convinced
Some observers have taken to social media to argue that the initiative may not be beneficial in the long run.
Well-known American software engineer Grady Booch told his 165,500 X followers in an Aug. 2 post that it “will not end well.”
“Argentina is using AI to fight crime, but at what cost to privacy?” software engineer David Arnal commented.
“Once again, where are the Milei supporters on this one?” author Derrick Broze added.
The announcement follows recent news that the United States government is investigating OpenAI, the creator of ChatGPT, to gain more insight into its safety standards.
On July 23, Democratic members of the United States Senate and one independent lawmaker sent a letter to OpenAI CEO Sam Altman regarding the company’s safety standards and its treatment of whistleblowers.
The most significant portion of the letter, first obtained by The Washington Post, was item nine, which read, “Will OpenAI commit to making its next foundation model available to the U.S. Government agencies for pre-deployment testing, review, analysis, and assessment?”
Meanwhile, the United Nations General Assembly recently endorsed a resolution on AI.
The resolution, initiated by the United States and backed by 123 countries, including China, was adopted on March 21 and encourages countries to safeguard human rights, protect personal data, and monitor AI for risks.