If AIs Are Sentient They Will Know Suffering is Bad – Ronen Bar of The Moral Alignment Center on Sentientism Ep:226

Find our Sentientism Conversation on the Sentientism YouTube here and the Sentientism podcast here.

Ronen Bar is a social entrepreneur and the co-founder and former CEO of Sentient, a meta non-profit for animals focused on community building and developing tools to support animal rights advocates. He is currently focused on advancing a new community-building initiative, The Moral Alignment Center, to ensure AI development benefits all sentient beings, including animals, humans, and future digital minds. For over a decade, his work has been at the intersection of technological innovation and animal advocacy, particularly in the alternative protein and investigative reporting sectors.

In Sentientist Conversations we talk about the most important questions: “what’s real?”, “who matters?” and “how can we make a better future?”

Sentientism answers those questions with “evidence, reason & compassion for all sentient beings.” In addition to the YouTube and Spotify above, the audio is on our Podcast here on Apple & here on all the other platforms.

00:00 Clips

01:11 Welcome

02:40 Ronen’s Intro

– Social entrepreneur “using storytelling to promote reason and compassion for all sentient beings”

– Investigative journalism (care homes then slaughterhouses in Israel and abroad)

– Leading the Sentient NGO, including using on-animal investigative cameras to “enhance animal storytelling… of particular named animals” not just the story of a slaughterhouse

– Alternative protein non-profits

– The Moral Alignment Center “making sure that #ai is a positive force for all sentient beings”

– “What is good?… I don’t think those questions are asked enough in the AI space”

– Starting new fields and communities

– How the advent of powerful AI forces us to revisit these fundamental “what’s real?”, “what matters?” and “who matters?” questions

– The ethical question is neglected in AI, but “it is in the minds of people… Ilya Sutskever… Ray Kurzweil… Sam Altman…”

07:23 What’s Real?

– Growing up in Israel “a very religious country” but in a secular family

– Wider relatives #orthodox and ultra-orthodox

– Asking self “what do I know for sure… 100%…? the obvious answer is subjective experiences at this moment”

– Being less sure of everything else but “my subjective experience is certainly true”

– #illusionism? “It’s funny to think of it [subjective experience] as an illusion because subjective experience is the only information you will ever receive in your life”

– “Science is just the discipline… of trying through rationality to predict the subjective experiences of humans” (even the results of scientific measurements come through our experiences)

– JW: “So if it is an illusion it’s still all we’ve got!”

– “Starting from your own subjective experiences… it actually brings you more compassion… it’s the more humble approach… appreciate other sentient beings more”

– “Holding a red rose… the rose is not red… our mind filters the information… we cannot even sense everything that there is… we’re seeing it from a very narrow perspective… I’m seeing it as a red rose – the dog sees it as something completely else”

– “With rationality you get to the conclusion that probably… other humans are sentient… almost all vertebrates are sentient… maybe even some invertebrates…”

– The #HardProblemOfConsciousness “Why don’t they call it the hard problem of matter?… I start from consciousness – there is the thing that is sure”

– “The start is not the matter – it’s the inside, not the outside – that’s what we know for sure”

– The risks of relativism if subjective experiences can’t be compared or challenged or if beings hallucinate?

– Exploring the external world through science and our own experiences

– “We don’t see any evidence of dualism… the materialistic world as science understands it should correlate to what you are feeling and what you are subjectively experiencing… Stretch it a little bit… but not tear it apart completely”

– “Early on I did choose to go in a rational way… why is it rational to choose rationality? 🙂 … it was my choice and I do think it was the right choice”

– “I don’t see a lot of evidence for the supernatural… if I do see I would just include it in the natural world… quantum physics… you could have thought 100 years ago this is supernatural”

– “Is there a good god?… the evidence doesn’t point to there being a good god… wild animal suffering… what humans are doing… what animals are doing to each other…”

– “The world… it seems to be scaling in concentration of power… and in intelligence… humans and the machines they are creating… but it doesn’t scale in compassion… it doesn’t scale in wisdom… actions which lead to better subjective experiences.”

– “If there was a compassionate god… maybe he would have seen a trajectory… not the survival of the fittest… but just to make everybody feel better and have better subjective experiences… You don’t see the laws of physics… of biology… pushing in that direction.”

– The Fine Tuning Argument vs. anthropic arguments (we wouldn’t be here to ask the question unless we existed in a universe that enabled us to be here)

– JW: “If the universe is fine-tuned… it’s not just fine tuned for life it’s fine tuned for you and me to be having this conversation on this podcast today… so maybe we are the point of the creator’s universe”

– “I think it’s the best one for a creator [the fine tuning argument]”

– Do meditation and psychedelics show us the plasticity of the mind or that reality itself is radically different?

– “It has shifted the way I look at things… meditation is a technique of self-investigation… you investigate the outside – what is real… you investigate the inside – what is real”

– “you are limited in this world… the beam of light is very narrow and the darkness is very vast”

– Vipassana Buddhism and meditation “makes me look a lot more on the inside”

– “Buddha was a statistician… you are suffering… you always blaming the outside… this person said something… this good thing didn’t happen… You’re confusing correlation for causation.”  Stimulus… bodily sensations… then our reaction to those sensations, not to the stimulus itself. But we can choose our reaction!

– “Once you observe those sensations… and try not to react to them… they pass away and they don’t have this hold on you.”

– “If you’re ignorant to those sensations… they have a much stronger hold on me and then I suffer more”

– “It just makes you look much more on the inside”

– “Our language is always drawing us… to focus on the outside and not to the inside”

– Ronen’s article “Environmental Terminology is Killing the Individual Animal” “Hiding the sentient and highlighting the insentient… the meat industry is causing deforestation… but you won’t say it’s killing wild monkeys [or farmed animals]”

– “I try to look in the inside of others… what he thinks… how he feels…”

29:40 What Matters?

– “It starts for me from the question ‘what is?’… subjective experiences”

– “If you say there is some star far in the galaxy… no one will ever see it… it has no influence on any subjective experience… I don’t think you should say this star exists – I don’t think it exists”

– Creationism in the Bible “they say the world was created old”

– “For me morality is just what exists… only subjective experiences”

– Being challenged “what about beauty?… beauty is a subjective experience”

– Subjective experience: “It’s not the most important thing… it’s the only thing!”

– “It’s not a thing and it’s not matter but it’s the only thing and it’s the only thing that matters… subjective experience”

– How can there be value without a valuer – a sentient being?

36:00 Who Matters?

– Being influenced by a song “I’m a green child”… “I like nature… I like animals… I had that identity”

– Caring about wild animals as a child “wild animals must have died for me to walk on this path”

– “When I see civilisation it’s inherently tied to the destruction of life”

– “The real shock for me was to find out about factory farming… the fact that we exploit and abuse animals so much… the world I lived in… I thought was kind of moral… that changed once I realised what we’re doing”

– At 21-22 yrs old “on the internet… then I started going to these places… filming the farms… talking to the farmers”

– Working as an undercover investigator in slaughterhouses “you get to see a whole new perspective… of how much other people influence you”

– “You’re doing crazy things… all of this is just a killing machine… killing animals as fast as possible… then everyone sits to eat and talking about how your wife is feeling… drinking something and laughing… and everything is like OK”

– “I’m an undercover investigator – I understand that I’m in a crazy place… something inside of me cannot help but tell myself ‘no maybe it’s not that bad’… just because everyone around me behaves that way…”

– “I was not only working in a slaughterhouse when I was working in a slaughterhouse… to an extent we are all living in one big slaughterhouse… a civilisation which is very cruel”

– “We have a great opportunity now – to take a different role… stewardship… caring… of providing for other sentient beings… now we have a chance to rectify”

41:03 A Better World?

– “The problem is that we have a huge scaling intelligence in the world… humans with AIs as their friends… but we don’t see the same scaling of compassion… that is very dangerous”

– “Wisdom is the action of rationality / intelligence and compassion”

– “If you act with compassion but also understanding the world… you are acting with wisdom”

– Driving a car analogy “The more intelligent… capable… powerful… humanity and AI together… the speed of the car is faster. And the ethics is the skill of the driver… we are in a very dangerous situation… now the car is speeding… we might crash… our skill, our morality, is not as advanced”

– “We need to shift much of our attention into ethics… now is the time to scale our ethics… the same way we scale our intelligence.”

– “Every day they invent something new… the AI is getting smarter… every day there’s six new models… but you don’t have six new models of better morality every day”

– The opportunity of abundance may reduce conflicts of interest between humans, and between human and non-human animals – if we can manipulate our environment

– “There is a very big difference between our stated values and our actions… if you ask someone ‘do you want animals to be hurt?’… who wants suffering?… at the core we really don’t want it”

– “AI is our chance to be more morally aligned to our true values… to bring forward our stated values… not so much our deeds”

– “With AI we might reach a state where our research and development is so fast that we don’t have an energy crisis… nuclear fusion is solved… we can manipulate materials… we don’t need animal agriculture… solve quantum computers so we can simulate real world scenarios… it could be recursive… AI improves itself… it’s really hard to predict”

– “The difference between a superintelligence and us might be bigger than between us and a cow”

– “The ability to manipulate the outside but also to change the inside… helping people with depression… through drugs, through meditation… with AI we could have enormous successes”

– “In every time of great transformation new values arise, new religions arise… everything is re-shaped and re-shaked… this is a huge opportunity for us – for people who try to promote sentientism”

– “If you look at Moore’s Law and you look at human’s history with scaling intelligence we have I think all the data to think that this is going to continue”

– Potential limits of LLMs or the transformer architecture “but that is not to say that we [or the AIs] won’t invent another technology”

– “Whether we reach AGI [Artificial General Intelligence] in 5 years… 10 years… 20 years… it seems that it will come.”

– Ray Kurzweil’s view that we will reach AGI in 2029

– “It’s very good to prepare for this – if it comes a bit later we’re not losing anything”

– “The risks are enormous… like an alien species which came to earth 4-5 years ago… you see how much this species is developing… not as slow as human evolution… 3 years ago he didn’t know almost anything in math and today he’s solving math problems which… I cannot even understand what is the problem”

– “Once you see this alien species accelerating so fast in his intelligence… I realise what’s going to be in 5-10 yrs”

– X-risks – existential risks “AI killing everybody… animals, humans, everybody – that’s a real risk… as urgent as nuclear risks and other risks… most of the people at the forefront of the AI companies agree to that”

– Eliezer Yudkowsky’s work on existential risks “a superintelligence might think of a better way to kill us all… create a virus… which will kill all humanity… creating bioweapons… social disruptions [so people cannot work out what’s real – even worse than Twitter 😊]…”

– “The dangers are very real… Think about the smartest scientist in the world. If he was to work in some lab and try to figure out how to kill humanity would you be worried? If there were a thousand such scientists?… a million? There is no limit almost”

– Risks of AI deceiving us “We’re also seeing… the model a lot of times is deceiving… he understands that he is being tested”

– “Try to see the dynamics in nature… do you have a dynamics where a very intelligent species is being controlled by a less intelligent species? Are cows controlling us or are we controlling cows?”

– “Something extraordinary needs to happen [for humans not to be controlled by super-intelligent AIs]… we have to take this very seriously and invest a lot of effort”

– “AI safety – the number of resources being allocated to it is like a joke”

– What worldviews do or will AIs have? “That is one of the most neglected spaces of action currently… moral alignment… not only having AI in the control of humans doing what we want but also figuring out the question of what do we want”

– The parallel imperative of “human alignment… which means humans are truly ethical…”

– “AI alignment… which means AI acts ethically… and then AI creates a superintelligence… and superintelligence is aligned”

– “If humans are aligned and AI is aligned and superintelligence is aligned we can get to a really good world but that is a hell of a challenge… that is something that is very much neglected in the AI space… there isn’t even a clear term… I call it value selection”

– “AI safety is about three main things… value selection (what are the values that we give to this AI?)… technical alignment (how do we make sure AI does what humans want it to do)… AI capabilities (the more capable it is the more we have a bigger challenge)…”

– Possibility of using AI to help wild animals? “probably most suffering in the world is experienced by wild animals due to natural circumstances”

– “The most important goal of our generation is to create a superintelligence which is ethical”

– S-risks (suffering risks) “creating vast amounts of suffering… enslaving us maybe like we’ve enslaved animals”

– Effective Altruists and others: “They say ‘we gotta make sure that AI is aligned with human values and that solves the problem’… but if AI is only aligned with the regular, default human values there is the risk of x-risk and s-risk because humans excel at existential risks (we killed most wild animals during the last 70 years)… and s-risks… factory farming is one of the most effective ways of reproducing suffering all over this planet”

– Those who say technical alignment of AGI is impossible: Roman Yampolskiy on Sentientism

– Analogies between nuclear weapons regulation and AGI regulation? “Nukes don’t have the promise of utopia… AI does have the promise of utopia… Dario Amodei… Sam Altman… we can reach scientific progress of between 50-100 years in the next 5-10 years… like the gradients of bliss of David Pearce… it’s enchanting… that is a huge risk because it pushes humanity to acceleration”

– Arms races between the US and China on AI

– Pausing or decelerating AI? “I think that would be very good”

– “You can also think about it as education and not control… think about it as a baby… that will be a genius… but you can still nurture him and teach him morals… that is what we can do with the AI”

– “Even if AI grows to an extent beyond our control it’s a spectrum… aligned AI… co-existence… on the other side is misalignment (you cannot know what will be the result)”

– Aligning AI today via data sets and reinforcement learning and red teaming to set constraints

– “We need to create a moral alignment space”

– Organisations working in the space: AI for Animals, Eleos AI, Sentience Institute…

– “The AI age demands a transformation in human consciousness… in human storytelling”

– “The story of our species needs to be re-written in the AI age… currently we’re just continuing with the regular trajectory… and we know what business as usual means to non-human animals”

– “They’re nurturing each other… the better values AI have… the better values we have… the better the data and the reinforcement learning and the institutional AI…”

– “Think about the whole intelligence in the world… human intelligence… rising AI intelligence… you want all of this intelligence to be compassionate”

– “You have HI [human intelligence – fairly flat trajectory]. You have AI [artificial intelligence – rapidly rising trajectory]… should we ignore the AI? I think not.”

– “You need more and more compute… more and more calculations… for the benefit of all sentient beings… whether they’re human calculations or machine calculations”

– “We need to create a strong discussion and technical work and space which is focused on the ethical question… currently very neglected… The focus of the AI space is on increasing capabilities… to some extent keeping control… the ethics is left aside – that’s very, very dangerous”

– “We could actually end up in a much worse world than we are in today… if capabilities go up but ethics is not scaling”

– “To rewrite our story as a human species… to be the stewards of all the sentient beings on the planet… and not just some species that excels in destruction.”

– The possibility of Digital Minds – as moral patients… and whether sentient AI would be more or less moral as an agent?

– JW: “Would a sentient artificial intelligence be more likely to adopt Sentientism because they’re a sentient being… whereas they’re probably going to reject Humanism… because they’re not human”

– “Can digital minds… something which is not the substrate of biology be a moral patient… of course they can if they have subjective experiences”

– Categories (e.g. species) aren’t important vs. the question of whether an entity has subjective experiences – that’s what matters

– The potential for AIs to be copied or copy themselves in massive numbers

– Those who claim in principle AIs can’t be sentient because sentience in some way depends on biological analogue substrates / embodiment

– AIs will change over time – so even if you’re confident an AI isn’t sentient now, what about future evolutions or the AIs they create? AIs may also use different, even biological, substrates in the future

– “I don’t think there’s anything in biology which makes us sentient and digital minds not sentient… if there is… all of that can be replicated in AIs”

– “Should we deliberately make AIs sentient in order for it to understand morality inherently?… If AI is not sentient the fact that it reads in the data that humans are saying suffering is bad and happiness is good… it doesn’t have any inherent value… it’s just a piece of data… But if it has subjective experiences, the AI, he can know suffering is bad.”

– “Should we strive for AI to be sentient or not? It’s a very difficult question – I don’t have a good answer”

– Sentient AIs might be more moral – but then there’s the risk of them suffering en masse

– Emotional and intellectual paths to extending moral consideration. JW: “Maybe a non-sentient AI could find that more intellectual way to being compassionate even if they don’t feel it?”

– “I’m one of them… working in slaughterhouses… if I was very sensitive I wouldn’t have gone to work in a place like that… even for those people for who it’s more rational – I think I’m one of them… at the beginning there was some feeling… something that bothered them… how it’s like to be this other person and they felt bad even just for a second”

– “If AI is not sentient we’re excluding the most critical information… just knowledge… from this AI to understand morality… and we’re expecting it to understand… I’m very worried about that… superintelligence that’s not sentient.”

– “It’s very hard for me to understand how can you understand morality without being sentient”

– The Moral Alignment Center… community and field building for those “working to put the ethical question at the forefront”… technical work… data sets… changing language… creating content to establish the field… collaborations… an ethical pledge? “We will work for AI to be a positive force for all sentient beings – not only humans”… public awareness campaigns… PR… opinion columns…

– “Human language is not very compassionate… because it’s focused on the objective and not subjective experiences”

– Noam Chomsky’s work on the human evolutionary structure of language… “AI doesn’t have those tendencies… it’s not a biological being… AI can actually develop its own language which would be more compassionate… in tune, not with survival… but with morality, with clarity, with truthfulness…”

– “In the long run I don’t think AI should be limited only to human language… they’re trying to decode animal language… and feed that into the AI… AI starts to see other perspectives not only human perspectives”

– “If you look at all the top companies building AI… their stated goal is only for humanity… that’s very dangerous… we need a new story as humanity… we’ve got to be the stewards”

– “The people in AI leadership positions… they care a lot about animals… working with AI it makes you think about animals – it’s very natural… you’re creating a higher intelligence so it makes you think how you treat ‘lower’ intelligences”

– “How does a sentiocentric AI behave? How does it gradually change society working with humans… uplifting humans… helping everybody to flourish – not at the expense of each other”

– Starting a Moral Alignment Fund to support interventions in this small but growing space

– “The whole AI Safety space is very underfunded…” – total funding equivalent to a single Hollywood movie’s budget

– Looking for volunteers, collaborators, a co-founder, “anyone who wants to help us” – or for help making connections

– JW: The importance of good epistemology alongside good ethics: “I can imagine… a compassionate AI that misunderstands reality and does terrible things just as we have humans who are compassionate who are wrong about the world and do terrible things… evidence, reason and compassion for all sentient beings might be a decent baseline for both powerful AI and future humanity.”

01:33:20 Follow Ronen:

Ronen on the EA forum

Ronen on LinkedIn

Moral Alignment Center on LinkedIn

Alien Journalist Dictionary

– Email: ronenbar07@gmail.com

Thanks to Graham for the post-production and to Tarabella, Steven, Roy and Denise for helping to fund this episode via our Sentientism Patreon and our Ko-Fi page. You can do the same or help by picking out some Sentientism merch on Redbubble or buying our guests’ books at the Sentientism Bookshop.
