Posts tagged with "ai"

On the Virtue of Real Action in Place of "Virtue Signaling"


When jogging through my neighborhood at sunrise, I often see backyard signs pledging allegiance to a sacred political principle that my neighbors hold dear. The signs communicate what the neighbors want others to think they care about, yet they do little in practice to promote the cause they highlight. They are posted because they represent a popular opinion within the community; in a community with a different set of values, they would not be posted, so as to avoid the risk of controversy. Ironically, it is that other community that needs convincing, and where a sign would serve the purpose of opening a dialogue to improve the world.

A 2020 Morning Consult poll found that a quarter of adults without children say climate change is part of the reason they did not have children. Given the scale of our other industrial activities, their choice does little to suppress climate change, much as becoming vegan does little to save endangered species. But these decisions make people feel and look better within their like-minded communities.

Later in my day, I see many colleagues on the academic campus using popular slogans to express their loyalty to trendy principles. The spectacle reminds me of the uniforms we wore in elementary school to hide our actual socioeconomic backgrounds. This is all well and good, except that when it comes to the hard work of fulfilling these same principles by actually helping real people, those colleagues are nowhere to be found.

What is the virtue inherent in "virtue signaling"? Clearly, it is the pleasure of communicating the beauty of ideas that aim to repair a broken world. But without being turned into action, beautiful ideas resemble an engine without a transmission. A car's transmission is essential for converting the engine's power into motion on the road; the engine by itself only makes noise.

Why, then, is action so rare? Obviously, because it requires hard work, as well as an effective strategy for actually making a difference.

Over the past decade I had the privilege of serving simultaneously as director of the Institute for Theory and Computation, chair of the Astronomy department, and founding director of the Black Hole Initiative at Harvard University. The reason I agreed to take on all three leadership roles at once was to improve my environment. They demanded a sacrifice of my precious research time. Those who know me would testify that nothing is more enjoyable for me than being fully immersed in creative scientific work, to which administrative distractions are the foe. But at some point in my career, I realized that I could not rely on others to do what needs to be done, and so I welcomed the opportunity to promote excellence and diversity. Most of my leadership effort was invested in supporting students, postdocs, and junior faculty of all backgrounds. The reason was simple: my own upbringing was unprivileged, and I knew how difficult it is to climb the academic ladder. I felt committed to helping fledgling scientists achieve success irrespective of where they started. Helping real people required hard work, unlike "virtue signaling".

To protect their privacy, I cannot name the dozens of individuals I was fortunate to help in those leadership roles over the years, but my home office is filled with "Thank-You" notes from all of them. The backyard signs of my neighbors serve a different purpose: they offer a shortcut to feeling better.

Unfortunately, "virtue signaling" also appears in scientific research because of peer pressure. For example, astrobiologists will lobby for the search for bio-signatures on the surface of Mars, but will shy away from promoting an unapologetically disruptive approach to looking for them. None of the past NASA missions to Mars employed a microscope or added a drop of water in-situ to Martian soil to check for signs of dormant life that might be awakened. The adopted approaches offered a safer path that avoided controversy, such as the claim by former NASA engineer Gilbert V. Levin, who served as principal investigator of the Labeled Release experiment on NASA's Viking missions to Mars and argued explicitly in a 2019 Scientific American essay that he is convinced we already found life on Mars in the 1970s.

Similarly, astrobiologists plan to invest billions of dollars in the search for primitive life in exoplanet atmospheres over the coming decades, but do not allocate even one percent of these funds to the search for intelligent life. To avoid controversy, they regard techno-signatures as risky relative to bio-signatures, even though the one biosphere we know, here on Earth, exhibits both.

The pattern repeats farther down. SETI scientists, who have searched for radio signals unsuccessfully for seven decades, mention the search for technological objects near Earth only peripherally, as an alternative. Yet when it comes to analyzing actual data on the anomalous geometry and non-gravitational acceleration of the first reported interstellar object, `Oumuamua, or the high material strength of the first two interstellar meteors, they join forces with the conservative mainstream of astrobiology and dismiss a possible technological origin upfront, without engaging in any further research. The Galileo Project aims to repair this attitude by following the scientific method and seeking new data on anomalous objects near Earth.

In another context, fundamental physics aims to explain reality, yet the mainstream of theoretical physics spent four decades developing abstract concepts of string theory and the multiverse with no experimental sanity checks. In this community, "virtue signaling" is to argue that engaging with real experimental data is merely an option for a physicist, akin to proposing that a plumber's job description could include the option of fixing plumbing issues in the Metaverse for subscribers who put Metaverse goggles on their heads.

Scientific "virtue signaling" professes loyalty to the mainstream while whispering about, but not pursuing, disruptive innovation, in order to avoid controversy. It offers an easy path of least resistance for scientists to remain popular within the groupthink, and it avoids the hard work required to improve on what we know. Herd mentality sometimes masquerades as "open-mindedness" when it lacks action to change the world.

Artificial intelligence (AI) systems like GPT-4 are trained to imitate humans. As such, they mirror society and are already showing biases and discrimination against various groups of people. By reflecting our image, AI provides a reality check on the limited effectiveness of "virtue signaling". Here's hoping that AI mirrors will bring awareness to the discrepancy between our wishful thinking and the reality surrounding us, so as to trigger action.

The unfortunate nature of "virtue signaling" is that it does not represent a sincere attempt to repair the world. On occasion, it can even lead to the opposite outcome, by pushing back against individuals who are engaged in an honest effort to promote change, because they upset the status quo and create controversy. These individuals are not as popular as the advocates of "virtue signaling". But they carry the actual virtues that others are merely signaling.

ABOUT THE AUTHOR

Avi Loeb is the head of the Galileo Project, founding director of Harvard University's Black Hole Initiative, director of the Institute for Theory and Computation at the Harvard-Smithsonian Center for Astrophysics, and the former chair of the astronomy department at Harvard University (2011–2020). He chairs the advisory board for the Breakthrough Starshot project, and is a former member of the President's Council of Advisors on Science and Technology and a former chair of the Board on Physics and Astronomy of the National Academies. He is the bestselling author of "Extraterrestrial: The First Sign of Intelligent Life Beyond Earth" and a co-author of the textbook "Life in the Cosmos", both published in 2021. His new book, titled "Interstellar", is scheduled for publication in August 2023.

Trump’s Indictment And The Future Of The Republican Party

ePa Live Guest:

Raynard Jackson, a Republican political consultant, lobbyist, and radio host who has served on the presidential campaigns of George H. W. Bush and George W. Bush. Jackson is a native of St. Louis, MO, and is one of the most sought-after conservative speakers in America. He is a frequent public speaker to college students, political & business groups and churches. Jackson has worked on numerous Republican U.S. Senate, gubernatorial, and congressional political campaigns.

He is the president and CEO of Raynard Jackson & Associates, a lobbying firm based in Washington, D.C. He is a staunch supporter of former President Donald J. Trump and has criticized Trump's critics, including liberal political pundits Joy Reid and Don Lemon, claiming they have done more to hurt Black people than Trump has.

Raynard joined ePa Live to discuss the ramifications of Trump's indictment and to give his predictions about the next presidential election.

Raynard answers the ePa Live question of the day:

Raynard Jackson on the ramifications of indicting a former U.S. president:

Raynard Jackson on Tennessee's House of Representatives expelling two of the three Democratic lawmakers who led gun control demonstrations from the House floor. Republicans had accused the three lawmakers of bringing "disorder and dishonour to the House":

Raynard Jackson discusses the 2023 Wisconsin Supreme Court election, held on Tuesday, April 4, 2023, to elect a justice for a ten-year term. Janet Protasiewicz prevailed in the state's highly consequential contest, and the court is now likely to reverse the state's abortion ban and end the use of gerrymandered legislative maps:

The 2024 presidential election is already shaping up to be one of the most heated political races in American history. Raynard Jackson, Republican political consultant, lobbyist, and radio host, offers his predictions on ePa Live:

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

PUBLISHED
March 22, 2023

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence notes that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5]  We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.

We have prepared some FAQs in response to questions and discussion in the media and elsewhere. You can find them here.


ePa Live: Racism & Technology In The Age of AI, Cultural Theft & Social Devaluation

ePa Live Guest:

  • Dr. Niyana "KoKo" Rasayon, MA, PhD, LPCC, Behavioral Neuroscientist; Associate Professor, University of the District of Columbia

Dr. Rasayon has authored two books that build on social neuroscience, "Reality Check: A Manual for the Hue-man Octahedron & The Mystery of Melanin" and "The Awakening: OMG The President is Black". His Master's thesis examined the psychological characteristics of vegetarians and non-vegetarians. He is a Board-Certified Fellow & Diplomate in Afrikan-Centered Black Psychology. Dr. Rasayon has taught psychology for 16 years, three of which included courses at the U.S. Pentagon. He also completed the first EEG (brain wave) study on culture and learning styles among Afrikan-Amerikan males at Howard University. His work, programs, and books can be found at: www.eyeofmaat.com

This Saturday we will discuss his work, the impact of technology on the brain, healthy ways to co-exist with technology, and why Black people are disproportionately and negatively impacted by algorithms and facial recognition technology. Join the conversation, like, share and subscribe! If you missed it, no worries, check it out below.