Posts tagged with "ai systems"

Navigating the Digital Frontier: How the US-Israel AI Agreement Shapes American Culture

The landscape of American innovation is constantly evolving, and a recent development on the global stage has significant implications for how we live, work, and interact. On July 8, 2025, in Washington D.C., the United States and Israel formally solidified a new memorandum of understanding (MoU) on energy and artificial intelligence. This agreement, designed to bolster joint research, innovation, and AI-powered energy projects, is more than just a diplomatic handshake; it’s a foundational step that will resonate deeply within the fabric of American culture. 

The Need-to-Knows: What is This Agreement All About?

At its core, this MoU aims to advance cooperation in applying Artificial Intelligence (AI) to strengthen the energy grids in both Israel and the U.S. It also encourages broader research, innovation, and the development of joint policies in this rapidly expanding field. Key figures like Israeli Prime Minister Benjamin Netanyahu, U.S. Interior Secretary Doug Burgum, and U.S. Energy Secretary Chris Wright were present at the signing, emphasizing the high-level commitment to this partnership.

Beyond energy, the agreement extends to promoting regional projects, including the ambitious India–Middle East–Europe Economic Corridor (IMEC) and the ongoing Abraham Accords. Crucially, it will establish a working group dedicated to cooperation on standards and optimal practices, as well as developing safe digital infrastructure for integrating AI into our energy economies. As ICC-accused war criminal Prime Minister Netanyahu noted, “AI is the thrust of innovation now and will create unbelievable things in the future. It’s both challenging, because there could be bad things in it, but there could be unimaginable good things.”

Takeaways for American Culture: Progress and Peril

This agreement signals a deepening commitment to AI integration at a national and international level, promising advancements in energy efficiency and technological leadership. For American culture, this could mean a future powered by smarter, more resilient infrastructure, leading to economic growth and enhanced national security. The collaborative spirit also aligns with American ideals of innovation and global partnership, potentially opening new avenues for scientific breakthroughs and shared prosperity.

However, as with any powerful technology, the rapid proliferation of AI, particularly in critical infrastructure, brings forth significant considerations, most notably regarding privacy. The very nature of AI relies on data—often vast amounts of personal and societal data—to learn and operate. While the agreement mentions “safe digital infrastructure,” the underlying question for Americans remains: how will this data be protected, and what mechanisms will be in place to prevent its misuse? Considering Israel’s track record of dishonesty, crimes against humanity, and blatant violation of all international laws and norms, especially in its active genocide of the Palestinian people, this move by the Trump administration is another gut punch to a weary and disillusioned American public. 

Implications for American Ideals: A Balancing Act

America was founded on principles of individual liberty, freedom, and the pursuit of happiness. These ideals are deeply intertwined with the concept of privacy and the protection of personal autonomy. As AI becomes more embedded in our daily lives, particularly within our energy systems, the potential for extensive data tracking and analysis grows exponentially.

This new trajectory in data tracking and AI presents a critical juncture for American culture. Does this advancement genuinely help Americans by providing greater efficiency, security, and innovation, or does it inadvertently hurt by chipping away at foundational privacy rights and potentially leading to unforeseen vulnerabilities?

The promise of AI to enhance our lives is undeniable. Smarter energy grids could lead to lower costs and more reliable power. Advanced AI could drive medical breakthroughs and improve public services. Yet, the lessons from past technological shifts, such as the UK’s Post Office scandal where a faulty computer system led to wrongful prosecutions and immense human suffering, serve as a stark reminder of the critical need for vigilance, transparency, and accountability in the deployment of powerful technologies.

A Historical Note: Lessons from the UK Post Office Scandal

To underscore the potential risks associated with relying on complex technological systems and the critical importance of accurate data, it’s worth reflecting on the recent tragedy in the United Kingdom involving the Post Office. In this devastating miscarriage of justice, a faulty computer system, known as Horizon, led to the wrongful prosecution and conviction of hundreds of innocent postal employees for theft and fraud. Based on incorrect data generated by the system, these individuals faced severe consequences, including imprisonment, bankruptcy, and immense personal suffering. Tragically, reports indicate that at least 13 people took their own lives as a result of the scandal, with many more contemplating suicide.

Comparing this historical event with the US-Israel AI agreement highlights both differences and chilling similarities. The most significant difference lies in the nature of the technology and its intended application. The Post Office scandal involved a flawed accounting system, while the US-Israel agreement focuses on leveraging AI for energy and broader innovation. The stated intent of the US-Israel agreement is to enhance efficiency and security, not to track individual transactions in a way that could lead to false accusations. Still, given the occupying nation’s history of blatant war crimes, corruption, sabotage (the exploding pagers), and dishonesty, how can any country trust the words written in an MoU with Israel?

However, the similarity lies in the potential for catastrophic consequences when complex technological systems mishandle data and when that misinformation is used to make decisions that impact innocent people. The Post Office scandal serves as a stark warning about the dangers of blind faith in technology and the critical need for human oversight, transparency, and robust mechanisms to challenge and correct erroneous data.

Does the US-Israel AI agreement pose a similar threat to the public as it pertains to mishandling data and using misinformation to prosecute or punish innocent people? While the agreement emphasizes “safe digital infrastructure” and cooperation on standards and optimal practices, the potential for unforeseen vulnerabilities and the misuse of data in complex AI systems cannot be dismissed. The scale and interconnectedness of AI in critical infrastructure like energy grids mean that errors or malicious actions could have far-reaching consequences. And Israel cannot be trusted. 

The key takeaway from the Post Office scandal in the context of the US-Israel AI agreement is the absolute necessity of proactive measures to prevent data mishandling, ensure the accuracy and integrity of AI systems, and establish clear lines of accountability. Without these safeguards, the potential for a different kind of “Horizon” scandal, one rooted in the complexities of AI and its application in critical sectors, remains a tangible threat.

As American culture embraces this new AI frontier, it is imperative that we, as citizens, engage in thoughtful dialogue about the ethical implications, demand robust privacy safeguards, and ensure that the pursuit of progress never comes at the cost of our enduring principles. The question isn’t whether AI will shape our future, but rather, how we will shape AI to ensure it serves the best interests of all Americans, upholding the very ideals our nation was founded upon. 

On the Virtue of Real Action in Place of `Virtue Signaling’


When jogging through my neighborhood at sunrise, I often see backyard signs pledging allegiance to a sacred political principle which my neighbors hold dear. The backyard signs communicate what the neighbors want others to think that they care about. However, these signs do little to promote in practice the cause they highlight. The signs are posted because they represent a popular opinion within the community. They would not be posted in a community with a different set of values, to avoid the risk of controversy. Ironically, it is the other community that needs convincing, and where the sign would serve the purpose of engaging in a dialogue to improve the world.

A 2020 Morning Consult poll found that a quarter of adults without children say climate change is part of the reason they didn’t have children. Given the rest of our industrial activities, their choice has little impact on suppressing climate change, akin to the impact of becoming vegan on saving endangered species. But these decisions make people feel and look better within their like-minded communities.

Later in my day, I see many of my colleagues on the academic campus using popular slogans to express their loyalty to trendy principles. The spectacle reminds me of the uniform we used to wear at elementary school to hide our actual socioeconomic backgrounds. This is all good, except that when it comes to the hard work necessary for fulfilling these same principles by actually helping real people, the same colleagues are nowhere to be found.

What is the virtue inherent in `virtue signaling’? Clearly, it is the pleasure of communicating the beauty of ideas that aim to repair a broken world. But without turning them into action, the beautiful ideas resemble an engine that lacks transmission. A car’s transmission is essential for turning the engine’s power into motion on the road. The engine by itself only makes noise.

Why is it, then, that action is rare? Obviously, because it requires hard work as well as an effective strategy for actually making a difference.

Over the past decade I had the privilege of serving simultaneously as director of the Institute for Theory and Computation, chair of the Astronomy department, and founding director of the Black Hole Initiative at Harvard University. The reason I agreed to serve in all three leadership roles at once was to improve my environment. They demanded sacrifice of my precious research time. Those who know me would testify that there is nothing more enjoyable for me than being fully immersed in creative scientific work, of which administrative distractions are the foe. But at some phase in my career, I realized that I could not rely on others to do what needs to be done, and so I welcomed the opportunity to promote excellence and diversity. Most of my leadership efforts were invested in supporting students, postdocs, and junior faculty of all backgrounds. The reason was simple: my own upbringing was unprivileged, and I knew how difficult it is to climb the academic ladder. I felt committed to helping fledgling scientists achieve success irrespective of where they started. Helping real people required hard work, unlike `virtue signaling’.

To protect their privacy, I cannot mention the dozens of individuals I was fortunate to help during my leadership roles over the years, but my home office is filled with “Thank-You” notes from all of them. The backyard signs of my neighbors serve a different purpose. These offer a shortcut to feeling better.

Unfortunately, `virtue signaling’ also appears in scientific research because of peer pressure. For example, astrobiologists will lobby for the search for bio-signatures on the surface of Mars, but will shy away from promoting an unapologetically disruptive approach to looking for them. None of the past NASA missions to Mars employed a microscope or added a drop of water in-situ to Martian soil in order to check for any signs of dormant life that might be awakened. The adopted approaches provided a safer path for avoiding controversy, such as that stirred by former NASA engineer Gilbert V. Levin, who served as principal investigator of the Labeled Release experiment on NASA’s Viking missions to Mars and explicitly argued in a 2019 Scientific American essay that he is convinced we already found life on Mars in the 1970s.

Similarly, astrobiologists plan to invest billions of dollars in the search for primitive life in exoplanet atmospheres over the coming decades, but do not allocate even one percent of these funds to the search for intelligent life. To avoid controversy, they regard techno-signatures as risky relative to bio-signatures, even though the one biosphere we know, here on Earth, has both.

The pattern repeats further down. SETI scientists who have searched for radio signals unsuccessfully for seven decades mention only peripherally the search for technological objects near Earth as an alternative. However, when it comes to analyzing actual data on the anomalous geometry and non-gravitational acceleration of the first reported interstellar object, `Oumuamua, or the high material strength of the first two interstellar meteors, they join forces with the conservative mainstream of astrobiology and dismiss a possible technological origin up front without engaging in any further research. The Galileo Project aims to repair this attitude by following the scientific method and seeking new data on anomalous objects near Earth.

In another context, fundamental physics aims to explain reality, yet the mainstream of theoretical physics was engaged for four decades in developing abstract concepts of string theory and the multiverse with no experimental sanity checks. In this community, `virtue signaling’ is to argue that engaging with real experimental data is merely an option for a physicist, akin to the proposal that the job description of a plumber could include the option of fixing plumbing issues in the Metaverse for the community of subscribers who put Metaverse goggles on their heads.

Scientific `virtue signaling’ professes loyalty to the mainstream while whispering about, but not pursuing, disruptive innovation, in order to avoid controversy. It offers an easy path of least resistance for scientists to remain popular within the groupthink. It avoids the hard work required to improve on what we know. Herd mentality sometimes masquerades as `open-mindedness’ when it lacks action to change the world.

Artificial intelligence (AI) systems like GPT-4 are trained to imitate humans. As such, they mirror society and are already showing biases and discrimination against various groups of people. By reflecting our image, AI provides a reality check as to the limited effectiveness of `virtue signaling’. Here’s hoping that AI mirrors will bring awareness to the discrepancy between our wishful thinking and the reality surrounding us, so as to trigger action.

The unfortunate nature of `virtue signaling’ is that it does not represent a sincere attempt to repair the world. On occasion, it can lead to the opposite outcome by pushing back against individuals who are actually engaged in an honest effort to promote change, because they upset the status quo and create controversy. These individuals are not as popular as `virtue signaling’ advocates. But they carry the actual virtues that others are signaling.

ABOUT THE AUTHOR

Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He chairs the advisory board for the Breakthrough Starshot project, and is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. His new book, titled “Interstellar”, is scheduled for publication in August 2023.

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

PUBLISHED
March 22, 2023

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here.