Big Tech giant Apple has restricted employee use of the popular artificial intelligence (AI) chatbot ChatGPT over fears that sensitive company data could be compromised.
A report by The Wall Street Journal revealed that an internal document circulated to Apple employees banned the use of OpenAI's Microsoft-backed chatbot ChatGPT and similar AI tools while the company develops its own AI technology.
According to the document, the iPhone maker is concerned that employees using the programs could expose confidential company information.
The document also restricts the use of GitHub's Copilot, a Microsoft-owned AI tool that automates the writing of software code.
Cointelegraph reached out to Apple for further comment.
The internal ban comes after the ChatGPT app for iOS debuted in Apple's App Store on May 18.
The new app is currently available to iPhone and iPad users in the United States, with OpenAI planning to expand to additional countries "in the coming weeks" and to release an Android version "soon."
Related: Is ChatGPT king? How top free AI chatbots fared during field testing
Alongside Apple, other large companies have restricted internal usage of ChatGPT. On May 2, Samsung sent a memo to employees banning the use of generative AI tools such as ChatGPT.
In Samsung's case, the policy followed an incident in which Samsung staff uploaded "sensitive code" to the platform.
Samsung told employees who use such applications on personal devices not to upload any company information, warning that violators could face "disciplinary action up to and including termination of employment."
In addition to Samsung and Apple, companies including JPMorgan, Bank of America, Goldman Sachs and Citigroup have also banned the internal use of generative AI tools like ChatGPT.
Many companies banning employee use of AI chatbots are also in the process of creating their own applications. Back in early May, Apple CEO Tim Cook said the company plans to "weave" AI into its products.
Magazine: ‘Moral responsibility’: Can blockchain really improve trust in AI?