Intellectual Property Meets GenAI in the Battle for Ownership
Intellectual property, such as sensitive company information, is at risk of loss when GenAI is adopted within organizations.
By Aditya Soni
March 1, 2024
Summary
GitHub faced a lawsuit for training Copilot on open-source code without respecting licensing terms, and Samsung banned AI tools after employees’ use led to potential data breaches, underscoring the pressing IP and privacy issues with generative AI use.
Even with intellectual property laws in place, IP and security emerge as top concerns for Fortune 500 companies using generative AI.
Companies are losing sight of IP protection as they prioritize rapid innovation and market expansion over the meticulous management of their intellectual assets, risking the dilution of their most valuable assets in the process.
What do we learn from Google, GitHub and Samsung about Intellectual Property rights?
The GitHub case isn’t the first time technology and IP have collided. Google successfully defended itself against a lawsuit by arguing that transformative use allowed it to scrape text from books to build its search engine.
The first open-source copyright lawsuit challenged GitHub Copilot
According to the plaintiffs, by training their AI systems on public repositories, the defendants violated the rights of many developers who posted code under open-source licenses that require attribution, including the MIT, GPL, and Apache licenses.
Samsung banned the use of AI tools
Samsung employees, seeking to solve work-related challenges, entered sensitive company data into ChatGPT, including proprietary source code and confidential meeting transcripts. They did so to leverage the AI’s capabilities for troubleshooting and optimizing work processes, but inadvertently risked significant data exposure and security breaches.
These events underscore the unique risks generative AI technologies pose to intellectual property, and highlight the critical need for stringent data protection measures and adherence to privacy protocols.
What do Fortune 500 companies think about GenAI and Intellectual Property?
A recent Acrolinx survey of Fortune 500 companies found intellectual property concerns to be prominent for enterprise leaders considering generative AI technology.
Asked what their primary concern about the use of generative AI is, 25 of the 86 respondents selected intellectual property as their biggest concern.
This was followed closely by customer security compliance concerns, which received 23 votes. Comparatively, the other three answers (bias and inaccuracy, privacy risk, and public data availability and quality) received between nine and 16 votes each.
A survey of 86 companies may not seem to make for a statistically significant study, but that number represents 17% of the Fortune 500 universe, which is a meaningful representation (a commonly cited minimum for a viable sample is 5%).
Why are companies losing sight of IP protection? 5 steps to mitigate risks of IP loss
In today’s competitive market, the race to generate revenue often leads companies into a precarious situation, particularly when it involves the intricate world of intellectual property (IP) and its intersection with generative artificial intelligence (GenAI). As businesses strive to innovate and capitalize on their unique IP amidst stressful market conditions, a new challenge emerges: safeguarding the ownership and value of their intellectual creations. This urgency to push products to market, license IP to third parties, or develop cutting-edge software-as-a-service (SaaS) offerings can inadvertently result in the dilution of a company’s most valuable assets.
Businesses can adopt several strategies to avoid the loss of intellectual property (IP) when using Generative AI (GenAI) technologies. Given the complex and evolving nature of IP law in relation to AI, these measures are essential to mitigate risks and ensure compliance:
Limited Access: To minimize the risk of data breaches or unauthorized access, access to project files and sensitive information should be restricted to those directly engaged in the project’s development.
Clear Ownership Agreement: Before any work commences, clear contracts should be established, outlining the specific ownership rights of project intellectual property (IP), ensuring clarity and protection for all parties involved.
Secure Infrastructure: To protect against potential cyber threats, the maintenance of robust security measures, such as firewalls, encryption protocols, secure servers, and regular software updates, should be prioritized.
Non-Disclosure Agreements (NDAs): To safeguard confidential information and trade secrets, NDAs should be signed, creating a legally binding commitment to protect sensitive data.
Version Control Systems: Utilizing version control systems, like Git or SVN, should be a standard practice to monitor development changes, ensuring that all previous versions of the work are securely archived and retrievable when necessary.
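The access-control idea in the steps above can be sketched in code as a pre-submission screen that checks text for sensitive material before it is sent to an external GenAI service. This is a minimal, illustrative sketch: the pattern names and regexes below are assumptions for demonstration, not a production secret scanner, which would use a vetted scanning tool and organization-specific rules.

```python
import re

# Illustrative patterns only -- a real deployment would rely on a dedicated
# secret-scanning tool and rules tailored to the organization.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "private_key_header": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "internal_hostname": re.compile(r"\b[\w.-]+\.internal\.example\.com\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt.

    An empty list means the text passed this (illustrative) check and
    could be forwarded to an external GenAI service; any findings mean
    the prompt should be blocked or redacted first.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a prompt containing something that looks like an API key is flagged,
# while ordinary text passes.
findings = screen_prompt("Debug this: client = Client(token='sk-abcdef1234567890XYZ')")
```

In practice such a screen would sit at a gateway or proxy in front of the GenAI service, so that every outbound prompt is checked centrally rather than relying on individual users.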
Lumeus.ai offers Zero Trust Security for AI, enabling IT Security to efficiently manage ShadowAI, control AI access, and enforce AI guardrails. It integrates seamlessly with existing security infrastructure, supporting identity platforms like Okta, Google, and Active Directory, and network security platforms from Palo Alto Networks, Zscaler, and Fortinet, enabling a smooth deployment.
If you’re interested in a deeper discussion or even in contributing to refining this perspective, feel free to reach out to us.
Unlock Zero Trust Security for GenAI and Data Access
Request a Demo