The Biden Administration emphasized the responsibility of AI companies to ensure their products are safe for use.
On July 21, The White House announced that prominent artificial intelligence (AI) companies, such as OpenAI, Google and Microsoft, have committed to developing AI technology that is safe, secure and transparent. The White House also acknowledged other companies like Amazon, Anthropic, Meta and Inflection for committing to AI safety.
The Biden Administration emphasized the companies’ responsibility to ensure their products are safe and to harness AI’s potential while upholding high standards in its development.
Kent Walker, Google’s president of global affairs, acknowledged that achieving success in AI requires collaboration. He said Google was pleased to join other leading AI companies in supporting these commitments and would continue to work with them by sharing information and best practices.
Screenshot of the statement release. Source: The White House
Among the commitments are pre-release security testing for AI systems, sharing best practices in AI safety, investing in cybersecurity and insider threat safeguards, and enabling third-party reporting of vulnerabilities in AI systems. Anna Makanju, OpenAI’s vice president of global affairs, stated that policymakers worldwide are contemplating new regulations for advanced AI systems.
In June, bipartisan United States lawmakers introduced a bill to create an AI commission to address concerns in the rapidly growing industry. The Biden Administration says it is collaborating with global partners like Australia, Canada, France, Germany, India, Israel, Italy, Japan, Nigeria, the Philippines and the United Kingdom to establish an international framework for AI.
According to Microsoft president Brad Smith, the company endorses the White House’s voluntary commitments and is independently committing to additional practices that align with their objectives. In doing so, Microsoft aims to expand its safe and responsible AI practices and collaborate with other industry leaders.
Global leaders, including the United Nations secretary-general, have expressed concerns about the potential misuse of generative AI and deepfake technology in conflict zones. In May, U.S. Vice President Kamala Harris met with AI industry leaders to lay the groundwork for ethical AI development and announced a $140 million investment in AI research and development by the National Science Foundation.