Posts tagged with "ai"

Navigating the Digital Frontier: How the US-Israel AI Agreement Shapes American Culture

The landscape of American innovation is constantly evolving, and a recent development on the global stage has significant implications for how we live, work, and interact. On July 8, 2025, in Washington D.C., the United States and Israel formally solidified a new memorandum of understanding (MoU) on energy and artificial intelligence. This agreement, designed to bolster joint research, innovation, and AI-powered energy projects, is more than just a diplomatic handshake; it’s a foundational step that will resonate deeply within the fabric of American culture. 

The Need-to-Knows: What is This Agreement All About?

At its core, this MoU aims to advance cooperation in applying Artificial Intelligence (AI) to strengthen the energy grids in both Israel and the U.S. It also encourages broader research, innovation, and the development of joint policies in this rapidly expanding field. Key figures like Israeli Prime Minister Benjamin Netanyahu, U.S. Interior Secretary Doug Burgum, and U.S. Energy Secretary Chris Wright were present at the signing, emphasizing the high-level commitment to this partnership.

Beyond energy, the agreement extends to promoting regional projects, including the ambitious India–Middle East–Europe Economic Corridor (IMEC) and the ongoing Abraham Accords. Crucially, it will establish a working group dedicated to cooperation on standards and optimal practices, as well as developing safe digital infrastructure for integrating AI into our energy economies. As ICC-accused war criminal Prime Minister Netanyahu noted, “AI is the thrust of innovation now and will create unbelievable things in the future. It’s both challenging, because there could be bad things in it, but there could be unimaginable good things.”

Takeaways for American Culture: Progress and Peril

This agreement signals a deepening commitment to AI integration at a national and international level, promising advancements in energy efficiency and technological leadership. For American culture, this could mean a future powered by smarter, more resilient infrastructure, leading to economic growth and enhanced national security. The collaborative spirit also aligns with American ideals of innovation and global partnership, potentially opening new avenues for scientific breakthroughs and shared prosperity.

However, as with any powerful technology, the rapid proliferation of AI, particularly in critical infrastructure, brings forth significant considerations, most notably regarding privacy. The very nature of AI relies on data—often vast amounts of personal and societal data—to learn and operate. While the agreement mentions “safe digital infrastructure,” the underlying question for Americans remains: how will this data be protected, and what mechanisms will be in place to prevent its misuse? Considering Israel’s track record of dishonesty, crimes against humanity, and blatant violation of all international laws and norms, especially in its active genocide of the Palestinian people, this move by the Trump administration is another gut punch to a weary and disillusioned American public. 

Implications for American Ideals: A Balancing Act

America was founded on principles of individual liberty, freedom, and the pursuit of happiness. These ideals are deeply intertwined with the concept of privacy and the protection of personal autonomy. As AI becomes more embedded in our daily lives, particularly within our energy systems, the potential for extensive data tracking and analysis grows exponentially.

This new trajectory in data tracking and AI presents a critical juncture for American culture. Does this advancement genuinely help Americans by providing greater efficiency, security, and innovation, or does it inadvertently hurt by chipping away at foundational privacy rights and potentially leading to unforeseen vulnerabilities?

The promise of AI to enhance our lives is undeniable. Smarter energy grids could lead to lower costs and more reliable power. Advanced AI could drive medical breakthroughs and improve public services. Yet, the lessons from past technological shifts, such as the UK’s Post Office scandal where a faulty computer system led to wrongful prosecutions and immense human suffering, serve as a stark reminder of the critical need for vigilance, transparency, and accountability in the deployment of powerful technologies.

A Historic Note: Lessons from the UK Post Office Scandal

To underscore the potential risks associated with relying on complex technological systems and the critical importance of accurate data, it’s worth reflecting on the recent tragedy in the United Kingdom involving the Post Office. In this devastating miscarriage of justice, a faulty computer system, known as Horizon, led to the wrongful prosecution and conviction of hundreds of innocent postal employees for theft and fraud. Based on incorrect data generated by the system, these individuals faced severe consequences, including imprisonment, bankruptcy, and immense personal suffering. Tragically, reports indicate that at least 13 people took their own lives as a result of the scandal, with many more contemplating suicide.

Comparing this historical event with the US-Israel AI agreement highlights both differences and chilling similarities. The most significant difference lies in the nature of the technology and its intended application. The Post Office scandal involved a flawed accounting system, while the US-Israel agreement focuses on leveraging AI for energy and broader innovation. The intent of the US-Israel agreement is to enhance efficiency and security, not to track individual transactions in a way that could lead to false accusations (though given the occupying nation’s history of blatant war crimes, corruption, sabotage such as the exploding pagers, and dishonesty, how can any country trust the words written in an MoU with Israel?).

However, the similarity lies in the potential for catastrophic consequences when complex technological systems mishandle data and when that misinformation is used to make decisions that impact innocent people. The Post Office scandal serves as a stark warning about the dangers of blind faith in technology and the critical need for human oversight, transparency, and robust mechanisms to challenge and correct erroneous data.

Does the US-Israel AI agreement pose a similar threat to the public as it pertains to mishandling data and using misinformation to prosecute or punish innocent people? While the agreement emphasizes “safe digital infrastructure” and cooperation on standards and optimal practices, the potential for unforeseen vulnerabilities and the misuse of data in complex AI systems cannot be dismissed. The scale and interconnectedness of AI in critical infrastructure like energy grids mean that errors or malicious actions could have far-reaching consequences. And Israel cannot be trusted. 

The key takeaway from the Post Office scandal in the context of the US-Israel AI agreement is the absolute necessity of proactive measures to prevent data mishandling, ensure the accuracy and integrity of AI systems, and establish clear lines of accountability. Without these safeguards, the potential for a different kind of “Horizon” scandal, one rooted in the complexities of AI and its application in critical sectors, remains a tangible threat.

As American culture embraces this new AI frontier, it is imperative that we, as citizens, engage in thoughtful dialogue about the ethical implications, demand robust privacy safeguards, and ensure that the pursuit of progress never comes at the cost of our enduring principles. The question isn’t whether AI will shape our future, but rather, how we will shape AI to ensure it serves the best interests of all Americans, upholding the very ideals our nation was founded upon. 

AI, Copyright, and Culture: Who’s in Charge at the US Copyright Office and What It Means for America

American culture is a dynamic tapestry woven with creativity, innovation, and a robust framework of intellectual property laws designed to protect it. At the heart of this framework lies the US Copyright Office, an institution often described as “sleepy” but now at the epicenter of unprecedented turmoil. As artificial intelligence (AI) rapidly reshapes creative industries, a leadership vacuum at the Copyright Office has ignited a constitutional debate with profound implications for creators and the very fabric of American society.

Need to Know: A Governing Structure in Flux

The current upheaval at the US Copyright Office stems from the abrupt, email-based dismissal of Register of Copyrights Shira Perlmutter by the White House’s deputy director of personnel. This dismissal followed a similar ouster of Librarian of Congress Carla Hayden, to whom the Register reports. Perlmutter is now suing the Trump administration, asserting that her firing was invalid and that she remains the rightful Register. Meanwhile, the White House has appointed new individuals to these roles, including former Trump defense attorney Todd Blanche as acting Librarian of Congress, who then attempted to appoint a new acting Register.

The core of the dispute lies in the authority to appoint and dismiss these critical roles. Perlmutter and some members of Congress argue that only the Librarian of Congress can fire the Register, and that the President lacks the authority to appoint the Librarian of Congress in this manner. The government, however, maintains the executive branch’s power to dismiss and appoint.

The practical impact of this legal and political battle is significant: the US Copyright Office is effectively without a clear, undisputed leader. The new appointees have not physically shown up for work, leaving the office in an unprecedented state of limbo.

Key Takeaways: Uncharted Waters for Copyright and Creativity

  • Leadership Vacuum and Legal Uncertainty: The lack of a clear, functioning Register of Copyrights creates significant uncertainty. Critical duties, such as advising Congress on copyright matters, are being delayed or stalled.
  • Validity of Copyright Registrations Questioned: Perhaps the most immediate and concerning issue is the validity of new copyright registration certificates. The Copyright Office temporarily paused issuing them and has since resumed, but with a blank space where the Register’s signature would normally be. Copyright experts are debating whether these unsigned certificates could be vulnerable to legal challenges in litigation, potentially undermining the very protections they are meant to provide.
  • Impact on Copyright Claims Board and Royalties: The absence of a clear leader also affects the Copyright Claims Board, a tribunal for resolving disputes, as a board member needs to be replaced. Furthermore, the recertification of the Mechanical Licensing Collective (MLC), which administers royalties for streaming music, is currently stalled, though its immediate operations may not be impacted due to the lack of a specific legal deadline for recertification.
  • AI Copyright Guidance in Limbo: The turbulence comes at a particularly sensitive time, with dozens of economy-shaking AI copyright lawsuits winding through the courts. Just days before her dismissal, Perlmutter’s office released a hotly contested, prepublication report on generative AI training and fair use—a report now being cited in major lawsuits. The lack of a stable leadership means that crucial finalized guidance on AI and copyright, vital for creators and tech companies alike, remains in limbo.

Implications for American Culture: Governing Structure Under Strain

The current situation at the US Copyright Office is more than just an internal personnel dispute; it’s a telling moment for American governing structure and its impact on the cultural landscape.

Our system of checks and balances and the established processes for appointing leadership in critical governmental bodies are designed to ensure stability, expertise, and continuity. When these processes are challenged or circumvented, it creates ripples that can affect various aspects of society. In this instance, the dispute highlights potential vulnerabilities in how our executive branch interacts with independent agencies and institutions, especially those vital for protecting intellectual property.

For American culture, this means:

  • Uncertainty for Creators: Artists, writers, musicians, and other creators rely on copyright law to protect their work and livelihoods. The current legal ambiguity surrounding registrations and the lack of clear leadership can sow distrust and hesitation, potentially stifling creative output at a time when AI is already challenging traditional notions of authorship.
  • Delayed Adaptation to New Technologies: AI’s rapid development necessitates swift and clear guidance from copyright authorities. A leadership void means the US Copyright Office is less equipped to provide the necessary frameworks and interpretations, leaving creators and innovators to navigate complex legal territory without a compass. This can impede technological progress and the integration of AI into creative processes in a way that respects existing rights.
  • A Test of Institutional Resilience: The very ability of a “sleepy” yet crucial institution like the Copyright Office to withstand political pressure and maintain its functions is being tested. The outcome of this leadership dispute will set precedents for how similar governmental bodies are managed and how effectively they can uphold their statutory obligations in the face of executive branch actions.
  • Shaping the Future of American Intellectual Property: The legal battles over AI and copyright, combined with the leadership vacuum at the Copyright Office, are actively shaping the future of intellectual property in the United States. The resolutions—whether through court decisions, legislative action, or a clear establishment of leadership—will determine how American culture values and protects its creative output in the digital age.

The saga at the US Copyright Office is a powerful reminder that the seemingly mundane aspects of our governing structure have profound and far-reaching impacts on the vibrancy and health of American culture. As the legal and political debates continue, all eyes are on Washington to see how this crucial chapter in the story of American copyright will conclude, and what it will ultimately mean for the creators and innovators who enrich our society.

On the Virtue of Real Action in Place of `Virtue Signaling’

Credit: TIME

When jogging through my neighborhood at sunrise, I often see backyard signs pledging allegiance to a sacred political principle which my neighbors hold dear. The backyard signs communicate what the neighbors want others to think they care about. However, these signs do little to promote in practice the cause they highlight. The signs are posted because they represent a popular opinion within the community. They would not be posted in a community with a different set of values, so as to avoid the risk of controversy. Ironically, it is the other community that needs convincing, and where the sign would serve the purpose of engaging in a dialogue to improve the world.

A 2020 Morning Consult poll found that a quarter of adults without children say climate change is part of the reason they didn’t have children. Given the rest of our industrial activities, their choice has little impact on suppressing climate change, akin to the impact of becoming vegan on saving endangered species. But these decisions make people feel and look better within their like-minded communities.

Later in my day, I see many of my colleagues on the academic campus using popular slogans to express their loyalty to trendy principles. The spectacle reminds me of the uniform we used to wear at elementary school to hide our actual socioeconomic backgrounds. This is all good, except that when it comes to the hard work necessary for fulfilling these same principles by actually helping real people, the same colleagues are nowhere to be found.

What is the virtue inherent in `virtue signaling’? Clearly, it is the pleasure of communicating the beauty of ideas that aim to repair a broken world. But without turning them into action, the beautiful ideas resemble an engine that lacks transmission. A car’s transmission is essential for turning the engine’s power into motion on the road. The engine by itself only makes noise.

Why is it then that action is rare? Obviously, because it requires hard work as well as coming up with an effective implementation strategy on how to make a difference.

Over the past decade I had the privilege of serving simultaneously as director of the Institute for Theory and Computation, chair of the Astronomy department, and founding director of the Black Hole Initiative at Harvard University. The reason I agreed to serve in all three leadership roles at once was to improve my environment, even though they demanded sacrificing precious research time. Those who know me would testify that there is nothing more enjoyable for me than being fully immersed in creative scientific work, of which administrative distractions are the foe. But at some phase in my career, I realized that I cannot rely on others to do what needs to be done, and so I welcomed this opportunity to promote excellence and diversity. Most of my leadership efforts were invested in supporting students, postdocs and junior faculty of all backgrounds. The reason was simple: my own upbringing was unprivileged and I knew how difficult it is to make it up the academic ladder. I felt committed to helping fledgling scientists achieve success irrespective of where they started. Helping real people required hard work, unlike `virtue signaling’.

To protect their privacy, I cannot mention the dozens of individuals I was fortunate to help during my leadership roles over the years, but my home office is filled with “Thank-You” notes from all of them. The backyard signs of my neighbors serve a different purpose. These offer a shortcut to feeling better.

Unfortunately, `virtue signaling’ also appears in scientific research because of peer pressure. For example, astrobiologists will lobby for the search for bio-signatures on the surface of Mars, but will shy away from promoting an unapologetically disruptive approach to looking for them. None of the past NASA missions to Mars employed a microscope or added a drop of water in-situ to Martian soil in order to check for any signs of dormant life that might be awakened. The adopted approaches provided a safer path for avoiding controversies, such as the claim by former NASA engineer Gilbert V. Levin, who served as the principal investigator of the Labeled Release experiment on NASA’s Viking missions to Mars and explicitly argued in a Scientific American essay in 2019 that he is convinced we already found life on Mars in the 1970s.

Similarly, astrobiologists plan to invest billions of dollars in the search for primitive life in exoplanet atmospheres over the coming decades, but do not allocate even a percent of these funds to the search for intelligent life. To avoid controversy, they regard techno-signatures as risky relative to bio-signatures even though the one biosphere we know, here on Earth, has both.

The pattern repeats farther down. SETI scientists who searched for radio signals unsuccessfully for seven decades mention peripherally the search for technological objects near Earth as an alternative. However, when it comes to analyzing actual data on the anomalous geometry and non-gravitational acceleration of the first reported interstellar object `Oumuamua or the high material strength of the first two interstellar meteors, they join forces with the conservative mainstream of astrobiology and dismiss upfront a possible technological origin without engaging in any further research. The Galileo Project aims to repair this attitude by following the scientific method and seeking new data on anomalous objects near Earth.

In another context, fundamental physics aims to explain reality, yet the mainstream of theoretical physics was engaged for four decades in developing abstract concepts of string theory and the multiverse with no experimental sanity checks. In this community, `virtue signaling’ is to argue that engaging with real experimental data is an option for a physicist, akin to the proposal that the job description of a plumber could include the option of fixing plumbing issues in the Metaverse for the community of subscribers who put Metaverse goggles on their head.

Scientific `virtue signaling’ professes loyalty to the mainstream while whispering — but not pursuing — disruptive innovation, in order to avoid controversy. It offers an easy path of least resistance for scientists to remain popular within the groupthink. It avoids the hard work required to improve on what we know. Herd mentality sometimes masquerades as `open-mindedness’ when it lacks action to change the world.

Artificial intelligence (AI) systems like GPT-4 are trained to imitate humans. As such, they mirror society and are already showing biases and discrimination against various groups of people. By reflecting our image, AI provides a reality check as to the limited effectiveness of `virtue signaling’. Here’s hoping that AI mirrors will bring awareness to the discrepancy between our wishful thinking and the reality surrounding us, so as to trigger action.

The unfortunate nature of `virtue signaling’ is that it does not represent a sincere attempt to repair the world. On occasion, it can lead to the opposite outcome by pushing back against individuals who are actually engaged in an honest effort to promote a change, because they upset the status-quo and create controversy. These individuals are not as popular as `virtue signaling’ advocates. But they carry the actual virtues that others are signaling.

ABOUT THE AUTHOR

Avi Loeb is the head of the Galileo Project, founding director of Harvard University’s Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He chairs the advisory board for the Breakthrough Starshot project, and is a former member of the President’s Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of “Extraterrestrial: The First Sign of Intelligent Life Beyond Earth” and a co-author of the textbook “Life in the Cosmos”, both published in 2021. His new book, titled “Interstellar”, is scheduled for publication in August 2023.

Trump’s Indictment And The Future Of The Republican Party

ePa Live Guest:

Raynard Jackson, a Republican political consultant, lobbyist, and radio host who has served on the presidential campaigns of George H. W. Bush and George W. Bush. Jackson is a native of St. Louis, MO, and is one of the most sought-after conservative speakers in America. He is a frequent public speaker to college students, political & business groups and churches. Jackson has worked on numerous Republican U.S. Senate, gubernatorial, and congressional political campaigns.

He is the president and CEO of Raynard Jackson & Associates, a lobbying firm based in Washington, D.C.  He is a staunch supporter of former President Donald J. Trump and has criticized his critics, including liberal political pundits Joy Reid and Don Lemon, claiming they have done more to hurt Black people than Trump.

Raynard joined ePa Live to discuss the ramifications of the indictment of Trump and gave his predictions about the next presidential election.

Raynard answers ePa Live question of the day:

Raynard Jackson on the ramifications of indicting a former sitting U.S. president:

Raynard Jackson on Tennessee’s House of Representatives expelling two of the three Democratic lawmakers who led gun control demonstrations from the House floor. Republicans accused the three Democratic lawmakers of bringing “disorder and dishonour to the House”:

Raynard Jackson discusses the 2023 Wisconsin Supreme Court election held on Tuesday, April 4, 2023, to elect a justice to the Wisconsin Supreme Court for a ten-year term. Janet Protasiewicz prevailed in the state’s highly consequential contest for the Supreme Court, which is now likely to reverse the state’s abortion ban and end the use of gerrymandered legislative maps:

The 2024 presidential election is already shaping up to be one of the most heated political races in American history. Raynard Jackson, Republican political consultant, lobbyist, and radio host offers his predictions on ePa Live:

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

PUBLISHED
March 22, 2023

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here.


ePa Live: Racism & Technology In The Age of AI, Cultural Theft & Social Devaluation

ePa Live Guest:

  • Dr. Niyana “KoKo” Rasayon, MA, PhD, LPCC, Behavioral Neuroscientist; Associate Professor, University of the District of Columbia

Dr. Rasayon has authored two books that build on social neuroscience, “Reality Check: A Manual for the Hue-man Octahedron & The Mystery of Melanin” and “The Awakening: OMG The President is Black”. His Master’s thesis examined the psychological characteristics of vegetarians & non-vegetarians. He is a Board-Certified Fellow & Diplomate in Afrikan Centered-Black Psychology. Dr. Rasayon has taught psychology for 16 years, three of which included courses in the U.S. Pentagon. Dr. Rasayon also completed the first EEG (brain waves) study on culture and learning styles among Afrikan-Amerikan males at Howard University. His work, programs and books can be found at: www.eyesofmaat.com.

This Saturday we will discuss his work, the impact of technology on the brain, healthy ways to co-exist with technology and why Black people are disproportionately and negatively impacted by algorithms and facial recognition technology. Join the conversation, like, share and subscribe! If you missed it, no worries, check it out below.