Paolo Ardoino, CEO of stablecoin issuer Tether, claims that people and their privacy can only be protected if AI models are run locally on users’ own devices.
He added that locally executable AI models would also ensure the models’ resilience and independence.
Ardoino pointed to modern consumer hardware, such as smartphones and laptops, saying these devices already have enough processing power to “fine tune general large language models (LLMs) with user’s own data, preserving enhancements locally to the device.”
Ardoino’s X post ended with “WIP,” a common acronym for “work in progress.”
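To illustrate what on-device fine-tuning could look like in practice, the sketch below attaches small LoRA adapters to a compact open model and trains them on text that never leaves the machine, using the open-source Hugging Face transformers, peft and datasets libraries. The model name, sample data and hyperparameters are illustrative assumptions for a laptop-scale demo, not anything Tether or Ardoino has published.

```python
# Hedged sketch (not Tether's implementation): parameter-efficient LoRA
# fine-tuning of a small open model on user-local text, with the resulting
# adapter saved to the device. Model, data and settings are illustrative.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed small open model

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Wrap the base model with low-rank adapters; only a small fraction of
# the weights is trained, which is what makes this feasible on a laptop.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Placeholder user-local text that stays on the device.
texts = ["My notes about project X ...", "Draft email to the team ..."]
dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
    remove_columns=["text"],
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="local-adapter",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

# The "enhancements" are just the adapter weights, saved locally.
model.save_pretrained("local-adapter")
```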
Speaking to Cointelegraph, Ardoino said that locally executable AI models represent a “paradigm shift” in user privacy and independence:
“By running directly on the user’s device, be it a smartphone or laptop, these models eliminate the need for third-party servers. This not only ensures that data stays local, enhancing security and privacy, but also allows for offline use.”
He said this allows users to enjoy “powerful AI-driven experiences and data analysis” while still maintaining complete control over their information.
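Building on the earlier sketch, once the adapter is saved to local storage, inference can run with network access explicitly disabled, which is the kind of offline, on-device use Ardoino describes. The model name and paths are the same illustrative assumptions as before.

```python
# Hedged sketch: offline inference with the locally saved adapter.
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # refuse network access; weights must already be cached

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # same illustrative model as above

tokenizer = AutoTokenizer.from_pretrained(MODEL)
base = AutoModelForCausalLM.from_pretrained(MODEL)
model = PeftModel.from_pretrained(base, "local-adapter")  # the user's private adapter

inputs = tokenizer("Summarize this week's notes:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```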
Tether recently announced its expansion into artificial intelligence, and Ardoino said the company is “actively exploring” the integration of locally executable models into its AI solutions.
His comments come in response to a recent hack of ChatGPT developer OpenAI.
According to reports, in early 2023 a hacker gained access to OpenAI’s internal messaging systems and was able to obtain details about the design of the company’s AI technologies.
Two sources close to the matter said the hacker stole details from sensitive discussions between OpenAI employees about the company’s technologies.
Related: Big Tech faces financial reckoning if human-level AI doesn’t happen soon
Additionally, ChatGPT users were in for a wake-up call after it was revealed that the app for macOS stored conversations in unencrypted plain-text files on the device.
This came shortly after Apple announced that it would integrate ChatGPT into its new “Apple Intelligence” lineup of AI-powered products and services.
The issue has reportedly been resolved, though the community is still questioning why it happened in the first place. One hypothesis is that storing the logs in plain text made it easier for OpenAI to access them in order to further develop and train ChatGPT.
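For context on what “encrypted at rest” means here, the following is a minimal sketch using the Python cryptography package. It is not OpenAI’s actual fix; in a real application the key would live in the operating system’s keychain rather than in the program.

```python
# Minimal sketch of encrypting a chat log at rest (illustrative only).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # would normally be stored in the OS keychain, not beside the data
fernet = Fernet(key)

conversation = b'{"role": "user", "content": "draft of a private question"}'

ciphertext = fernet.encrypt(conversation)   # this is what gets written to disk
with open("conversation.bin", "wb") as f:
    f.write(ciphertext)

# Reading it back requires the key; other processes scanning the disk see only ciphertext.
with open("conversation.bin", "rb") as f:
    restored = fernet.decrypt(f.read())
assert restored == conversation
```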
OpenAI is one of the major AI developers currently releasing high-level AI models, alongside Google, Meta and Microsoft. Industry analysts and even governments around the world have been concerned about what a Big Tech monopoly over AI models could mean for users in terms of privacy and control of data.
Many have even called for the decentralization of AI, with initiatives such as 6079 explicitly challenging Big Tech’s dominance of the AI industry in pursuit of a fairer future.
Magazine: ‘Raider’ investors are looting DAOs — Nouns and Aragon share lessons learned