Google has made a strong appeal to the U.S. government to establish federal legislation on artificial intelligence (AI) that would promote innovation while ensuring privacy and security. The tech giant has emphasized the need for a unified framework that addresses these critical issues. In its policy proposal, Google highlighted the lack of control developers have over how AI models are used, urging the government to recognize and address this challenge. Additionally, Google is seeking to solidify the right for itself and its competitors to train AI models on publicly available data.
Concerns Over Funding and Export Controls
The company has expressed concern over recent federal efforts to curtail spending and eliminate grant awards, calling instead for “long-term, sustained” investments in foundational domestic research and development (R&D). Google cautioned against imposing burdensome obligations on AI systems that could stifle innovation and competitiveness. It also criticized certain export controls implemented under the Biden Administration, arguing that they could undermine economic goals by placing disproportionate burdens on U.S. cloud service providers.
In its policy proposal, Google underscored the importance of balanced export controls that protect national security while facilitating U.S. exports and international business activities. The company argued that “fair use and text-and-data mining exceptions” are essential for AI development and scientific innovation in related fields.
“These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders,” Google stated.
Google’s approach to AI training has not been without controversy. The company has trained several models on public, copyrighted data and is currently facing lawsuits from data owners who claim they were neither notified nor compensated for the use of their work. Despite this, Google continues to advocate for weaker copyright restrictions, arguing that they are necessary to foster AI innovation.
Returning to the question of developer control, Google argued that responsibility for how models are used often sits with those who deploy them rather than those who build them. “Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging,” Google explained.
The tech giant also voiced concerns over disclosure requirements similar to those being considered by the European Union, describing them as “overly broad.” Google believes that these requirements could impede the progress of AI technology.
Author’s Opinion
Google’s push for less restrictive AI laws is understandable given its role in the tech industry, but its resistance to stronger copyright protections raises important ethical concerns. While the company advocates for more freedom in training AI models, it seems to downplay the importance of compensating data owners for the use of their work. A more balanced approach to regulation is needed—one that encourages innovation while protecting the rights of content creators and ensuring accountability for AI systems.