Companies using generative artificial intelligence could be putting confidential information at risk. Team8, an Israel-based venture firm, reports that generative AI tools expose corporate secrets, such as client information and intellectual property. These tools are vulnerable to data leaks, which could lead to lawsuits. In addition, any confidential information fed into the AI tools during the learning process may be impossible to delete. This has critical legal implications for personally identifiable information, which is governed by regulations in Europe (GDPR), Canada (PIPEDA), and California (CCPA).
Team8 does clarify that current large language models do not use AI chatbot user input to self-train; however, the same may not be true of future models. Using generative AI also exposes enterprises to additional security risks, such as a system or infrastructure data breach. Integrating generative AI into commonly used tools, like Microsoft Office via Copilot, and into enterprise systems, such as ERP and CRM platforms, creates the risk of inadvertently sharing sensitive corporate data.