The White House has issued its first National Security Memorandum (NSM) on artificial intelligence (AI), directing federal agencies to employ the "most advanced" AI systems while also weighing the risks that come with the emerging technology.
The NSM outlines the U.S. approach to harnessing the power of AI for national security and foreign policy purposes. Senior administration officials stress that America must lead the way in seizing the promise and managing the risks of AI.
According to officials, the directive is meant to ensure that agencies have access to, and actually use, the most powerful AI systems, a goal that will often require significant procurement efforts.
The NSM, which President Biden signed, gives a central role to the AI Safety Institute within the Department of Commerce. The institute has already guided the development of AI systems and has reached agreements with companies to test new AI systems before their public release.
The Biden-Harris administration released the memorandum on Thursday, the first of its kind.
Speaking at the National Defense University in Washington, national security adviser Jake Sullivan described the memorandum as the country's first strategy for harnessing AI, and managing its risks, in the service of national security. "This marks a crucial milestone in our efforts to leverage AI for the benefit of our nation," Sullivan said, adding that the framework would enable the United States to take advantage of AI's potential while addressing the dangers posed by the emerging technology.
Artificial intelligence has made significant strides in recent years and has been widely touted for its potential to transform industries, including the military, national security, and intelligence sectors.
The use of this technology by governments carries risks, however. There is concern that it could be exploited for mass surveillance, cyberattacks, or even the development of lethal autonomous weapons.
The framework unveiled on Thursday also places restrictions on how national security agencies may use AI, specifically prohibiting applications that could infringe on constitutionally protected civil rights or any system that would automate the deployment of nuclear weapons.