AI and HR: the clue's in the name (Newsweek Interview) by Robert Smith

AI will impact us in many areas, and HR (human resources) is a big one. The clue is in the name: this is about humans (although I must admit I don’t like the idea of people being treated as mere resources). Anyway, I’ve been working with Kevin Butler, an expert on HR software, at a company called Centigy, trying to provide means for AI to be certified to treat people in fair and unbiased ways as they pursue and progress in their jobs. This will be really important, particularly since regulatory bodies are beginning to pay attention, with HR identified as one of five “high-risk” areas for the application of AI in the EU. This will affect AI companies everywhere, since the regs will apply to anyone who wants to do business in Europe.

Newsweek picked up on what Kevin and I are trying to do at Centigy and did a really great interview that came out today. Please have a read; it’s an informative piece about something I think is really important.

New Paper: dispelling some AI Hype for the Defence Community by Robert Smith

As a co-author with Ravi Ravichandran (Vice President & CTO for Intelligence & Security at BAE Systems) and Chee-Yee Chong, I am proud to announce that we have a new paper out, entitled Artificial intelligence and machine learning: a perspective on integrated systems opportunities and challenges for multi-domain operations. The paper was presented by Ravi at the SPIE DEFENSE + COMMERCIAL SENSING conference this month and is published in the proceedings volume entitled Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III.

I think it’s a great paper for the defence community, as it places AI both in historical context and in the context of real-world defence systems. I think it’s essential in developing such systems that we get the AI hype out of the way and focus on helping people make the real, hard decisions involved in defence. Ravi and Chee did a great job on this paper, and I’m glad to have contributed to it as well.

You have to purchase the paper to read it or watch the presentation (organisations that run conferences have to pay for themselves somehow!), but if this topic is of interest to you, you might want to!

We and AI Response to the UK Gov Race Report... by Robert Smith

Click here to read a response to the UK Government’s Race and Ethnic Disparities Report 2021 by We and AI, a group for which I’m proud to serve as a trustee. The full response highlights shocking elements of the report, which has probably done more damage than good. In my opinion, the government should withdraw this report and put out something that actually reflects the realities of structural racism in the UK (including algorithmic bias). We and AI were amongst the many groups that offered evidence, but were effectively ignored in the report that was issued.


Of Zuck and Trump, on Rising Up with Sonali by Robert Smith

Just now, I was privileged to be interviewed by Sonali Kolhatkar for her great syndicated radio and TV show, Rising Up with Sonali. We talked about Mark Zuckerberg calling in the ethics panel Facebook established back in May 2020 to adjudicate the decision to indefinitely suspend the FB account of one Donald J. Trump after the January 6 insurrection in Washington, D.C. The Trump ban was certainly the right move, but it’s unclear whether it is a step towards the essential social media regulation that may eventually improve social discourse.

You can hear our conversation on KPFK (90.7 FM, Los Angeles) and various other radio stations today (Jan. 25, 2021), and tomorrow on Free Speech TV.

New Podcast, Connecting Rage to Safety by Robert Smith

I’m pleased to have been featured in the Embracing Differences with Nippin Anand podcast, in an episode entitled Artificial Intelligence: Understanding The Bias Built Into Machines. A unique direction the conversation takes is the issue of safety. Nippin is an expert on accident reporting, particularly around seagoing vessels. He and I share observations about how the schematization of communication, both in AI systems and in safety reporting, obscures human meaning. Have a listen, and I hope you enjoy it!

Investigating the Invisible (new podcast series) by Robert Smith

I’m really proud to be in the first episode of a new podcast series called “Investigating the Invisible”, hosted by Kevin Butler (of Centigy, a firm that’s advising companies on the benefits and risks of AI in HR) and Peju Oshisanya (of BenevolentAI, a company focused on using AI to find better medicines, while avoiding bias effects).

I’m even prouder because this first episode - entitled AI and Bias: is AI Biased (and is that our fault)? - also features Angela Saini, whose books Inferior: How Science Got Women Wrong and Superior: The Return of Race Science (along with her great, related BBC series with Adam Pearson, entitled Eugenics: Science’s Greatest Scandal) are amongst my highest recommendations.

If you find our episode interesting, you should check out the rest of the series, including two other episodes that were released simultaneously. Episode 2 is called Will AI Take Everybody’s Jobs?, and features Kevin and Peju in conversation with Deepak Paramarand (from Hitachi) and Jeff Wellstead (of Big Bear Partners). Episode 3 is called Ask Phillip, with Phillip Hunter, founder of CCAI Services, who has spent 25 years in product, strategy, design, and optimization for AI-powered automation, conversational and collaborative AI.

Investigating the Invisible is a project of We and AI, a new NGO/charity (of which I am proud to be a founding trustee) dedicated to increasing public awareness and understanding of AI in the UK (with a particular near-term emphasis on coping with racial bias in and with AI).

If you find that exciting, check out the episodes linked above (which you can find on all the standard podcast platforms), or have a quick listen to this 3-minute trailer.

Disinformation Inoculation: How you can act now to stop US Election Chaos by Robert Smith

I’m not going to bury the lead here: there’s information you need to share, now, particularly with your online friends with whom you most disagree. It could help save US democracy. That information is that the outcome of the US election is unlikely to be reliably known on Tuesday night, or even Wednesday morning.
Read on to find out why spreading this information, particularly beyond your own filter bubble, is vital.

You can rarely see a pandemic coming, let alone predict to within hours when it will start seeping into a population. But we have such a prediction today. And I believe that you can act now to impede the infection’s spread and its potential devastation.

This pandemic is one of US election disinformation that could push America over the edge, into chaos, wrecking structures, norms, and institutions in ways that we can’t even understand yet. Here’s why it could happen, why we know when it may start, and how your actions can help smother it before it takes hold.

The rare prediction is in a report from The Guardian, published just today. In essence, this US election is unique. Because of Covid-19 and massive early voting in America, vote counting might take longer than usual. Since Covid-19 causes more concern in densely populated areas, and because such areas are where many enthusiastic new voters are concentrated (young people, new citizens, etc.), city precincts are likely to report their votes later than usual, perhaps days later.

Combine that with the massive urban/rural divide in American political discourse: cities are far more aligned with Democrats, and rural areas go more for the GOP. Add in that the GOP presidential candidate has made strong statements that cast doubt on the validity of the American voting process this cycle. All these signs point to the following scenario:

On Tuesday night, the counted precincts could make it look like a win for Trump, but it might take days to discover that this is, in fact, not the case.

To understand how this could sow disinformation, it’s important to realize that it’s not uncommon for American news outlets to call a state for a candidate with very few precincts reporting. It’s a questionable practice at the best of times, but one could, in the past, justify it with various statistics, exit polls, and so on. But we all know that such mathematically guided guesses can be misleading, and even manipulative in the wrong hands. It’s not hard to see that such manipulation is more likely now than ever. And social media is sure to play a major part.
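To make that concrete, here’s a toy sketch of the scenario (a hypothetical Python illustration; every number in it is invented, not a projection of any real count):

```python
# A toy illustration of the reporting-order effect described above.
# All numbers are invented for demonstration; this is not a projection.

# (day_reported, votes_R, votes_D) for each batch of precincts
batches = [
    (0, 400_000, 250_000),  # election night: mostly rural, in-person votes
    (1, 150_000, 200_000),  # Wednesday: suburban precincts report
    (3, 100_000, 350_000),  # days later: dense urban precincts and mail ballots
]

total_r = total_d = 0
for day, votes_r, votes_d in batches:
    total_r += votes_r
    total_d += votes_d
    leader = "R" if total_r > total_d else "D"
    print(f"day {day}: R={total_r:,} D={total_d:,} -> {leader} leads")

# Output:
# day 0: R=400,000 D=250,000 -> R leads
# day 1: R=550,000 D=450,000 -> R leads
# day 3: R=650,000 D=800,000 -> D leads
```

The running total shows one leader for days and then flips, purely because of who reports when; no statistics, and no fraud, required.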

So, it’s likely that a disinformation storm online will happen soon after elections close on Tuesday night. It could cause chaos, maybe even violence. And we know it is coming, for once. So, what can we, the people, do about it?

The answer is for us all to act now, creating an online truth inoculation event, today. This is in line with the comments of Tom Ridge, the former Republican Governor of Pennsylvania, and former DHS Secretary under George W. Bush, who has said: “We’ve hopefully begun to inoculate and educate Americans around the necessity of patience so that every vote can be counted.” But have we? And more importantly, has that message broken out of today’s ever-present online filter bubbles?

Drawing on the research I discussed in my book, Rage Inside the Machine, we know that filter bubbles are an inevitable consequence of social network dynamics. Online messages rarely cross the dividing lines between polarised political tribes. But if we want substantial herd immunity to disinformation on election night (and in the days of counting that follow), we need as many people as possible to get the message that, this year, we need the patience to make sure every vote is counted, if we want democracy to work and continue.

How do we break this message out of the filter bubble? Our research shows that it won’t happen naturally under the current configuration of social media personalisation algorithms. But the algorithms aren’t the only actors in social media systems. You are an actor. And you can help. Particularly those of you on or near the edge of a filter bubble: that is, those of you who are connected (or can reconnect) to friends with whom you do not politically agree.
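To illustrate (and only to illustrate: this is a minimal sketch with an invented network and made-up parameters, not a model of any real platform or of the research above), here’s the difference between seeding a message at random and seeding it on “bridge” members of a homophilous network:

```python
import random

# A minimal sketch of message spread across two politically homophilous
# communities. The network and all parameters are invented for illustration.

random.seed(1)
N = 200          # two communities ("A" and "B") of 100 people each
HALF = N // 2

def build_network(p_within=0.08, p_between=0.0005):
    """Random friendships; same-community ties are far more likely."""
    friends = {i: set() for i in range(N)}
    for i in range(N):
        for j in range(i + 1, N):
            same_side = (i < HALF) == (j < HALF)
            if random.random() < (p_within if same_side else p_between):
                friends[i].add(j)
                friends[j].add(i)
    return friends

def spread(friends, seeds, p_reshare=0.2):
    """Seeds deliberately push to ALL their friends; everyone else
    reshares to each friend with probability p_reshare."""
    have = set(seeds)
    frontier = set()
    for s in seeds:
        frontier |= friends[s] - have
    have |= frontier
    while frontier:
        frontier = {f for person in frontier for f in friends[person]
                    if f not in have and random.random() < p_reshare}
        have |= frontier
    reached_a = sum(1 for p in have if p < HALF)
    return reached_a, len(have) - reached_a  # (reached in A, reached in B)

net = build_network()
# "Bridge" members of community A: those with at least one friend in B.
bridges = [p for p in range(HALF) if any(f >= HALF for f in net[p])]

print("seeded at random in A:", spread(net, seeds=[0, 1, 2]))
print("seeded on A's bridges:", spread(net, seeds=bridges[:3]))
```

With only a handful of cross-community ties, whether a randomly seeded message ever crosses the divide comes down to a few chance reshares; deliberately pushing it from the bridge members puts it straight onto the other side, where it can then spread.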

So, here’s the thing: push the following message to all your social media friends, especially those with whom you politically disagree! Pushing this information over the edge of filter bubbles is the key to rapidly overcoming polarisation and the effectiveness of the disinformation storm that is likely to start Tuesday night.

Here’s the (non-partisan) message to share, as much as you are able, over and over again:

1) We can’t expect all the votes to be counted on election night this year, due to Covid-19.

2) We won’t know the real winner for days after the election.

3) If we care about democracy, we have to be patient and ignore any early declarations of any candidate having won.

4) If we are patient, we will eventually have a true result, preserving and sustaining the US democracy that we all want to believe in.

I suggest pushing this message and continuing to push it from now until every vote is fairly counted. A relentless, filter-bubble-breaking cascade of truth could overcome this rare, predictable disinformation event.

Please share.

Edge Of Chaos: How To Build Resilience Into Our Democracy now on YouTube by Robert Smith

This is one of my favourite events I’ve done: an Australia Institute “Economics of a Pandemic” webinar entitled Edge of Chaos: How to Build Resilience into Our Democracy, covering a vital topic with Australia’s Shadow Minister for Innovation, Technology and the Future of Work, Clare O’Neil, and journalist and campaigner Peter Lewis. And it’s now available on YouTube:

Edge Of Chaos: How To Build Resilience Into Our Democracy by Robert Smith

I’m really excited about this webinar, which will take place on Thursday, 1 October 2020, at 2pm AEST (which is 8AM London time, 3AM or earlier in the USA, but I believe it will be online later as a video). It’s part of the Australia Institute’s series on “The Economics of a Pandemic.”

Those of you who have read Rage will know that I believe the key characteristic of all effective evolving systems, which include things like media, economies, governments, and society as a whole, is maximizing diversity and mixing while maintaining effective, resilient structures. This is the so-called “edge of chaos” effect.

It’ll be really interesting to talk about this with Australian MP Clare O’Neil, who is Shadow Minister for Innovation, Technology and the Future of Work. The conversation is being set up by campaigner and commentator Peter Lewis, Director of Essential Media and the Australian Centre for Responsible Technology. Peter’s planning for this has been great, and I really expect it to be a special event. I hope you are able to join!

Gender Inequality and AI Technology (a Mirza Blog) by Robert Smith

My first blog piece for and about my new company Mirza is now up. In my opinion, there’s a strong match between Mirza’s mission (gender equality) and my book Rage. And the “AI” tech we’ll be using there will be in service of aiding people in making important, complex human decisions for their lives, in the positive way I think the book points towards. Hope you enjoy the post, which you can read here. You can also sign up on the site as a beta tester for Mirza, and get their newsletter, which is consistently informative and fabulous.

Are Smartphones Causing Psychosis? by Robert Smith

There’s a new article out by Nichi Hodgson in the online magazine The Critic entitled Psychosis in an Age of Surveillance Capitalism. Nichi interviewed me for the piece, and there is a quote from me in it, but that is by no means why I am encouraging people to read it. It is a compelling exploration of the role smartphones are playing in mental health, particularly for people who experience psychotic episodes. It explores everything from the paranoia that arises when people think their phones are spying on them (and remember, just because you’re paranoid doesn’t mean you’re wrong), to mental health apps that try to treat psychosis, to the privacy and health concerns that arise from these phone-based treatments. It’s a fascinating and important read, sincerely.

Final AI Quote Comment: Is AI the new Fire, or Electricity? by Robert Smith

Finally (after many delays!) we come to the final entry in my series of comments about quotes from a Forbes article by Rob Toews. It’s a great one to end on, I believe:

“Artificial intelligence is one of the most profound things we're working on as humanity. It is more profound than fire or electricity.”

Those words come from Sundar Pichai, the CEO of Alphabet Inc., and its subsidiary Google LLC.

In considering this quote, we have to think about the nature of fire and electricity, and how profound they really are for humanity. First of all, I assume that Mr Pichai is talking about the discovery of how to use fire and electricity, not just their existence. Even “discovery” doesn’t quite work, as both fire and electricity have always existed. What Mr Pichai must mean is the harnessing of these natural phenomena. AI is different from these two things because (the hint’s in the name) it is entirely humanmade, rather than being a natural phenomenon that man must learn to harness.

Artificial can mean two different things in English: one is in the sense of artificial light. Light from a lamp is just light, but it is artificial in that a human made the lamp. In contrast, natural light comes from the sun, which isn’t humanmade.

The other meaning is in the sense of artificial flowers. Even the most beautiful of these are not flowers, but humanmade imitations of flowers.

One can argue that all fire and electricity are natural. Humanmade devices may initiate them, but they certainly aren’t imitations. AI isn’t the same. I’d argue that it is all as artificial as flowers made of plastic or silk.

In considering the quote in this light, it’s probably easiest to start by talking about how man learned to harness electricity, and to consider fire later. Humanity has observed and played with static electricity since at least 600 BC. Batteries and circuits of flowing electricity date from the late 18th century, and practical use (first to provide light in people’s homes) followed in the 19th century. Maxwell created the mathematics that describes the intrinsic physical link between electricity and magnetism in 1865, which paved the way for Einstein’s theory of relativity in the first few years of the 20th century. The rest, as they say, is history, and the entirety of modern physics and technology rests primarily on humanity understanding and harnessing electricity.

However, at its roots, the profundity of the impacts of electricity on humanity rests on a straightforward thing. Clarke and Kubrick graphically illustrate what this thing is in the early scenes of 2001: A Space Odyssey.

For those who haven’t seen it, the film opens with ape-like protohumans struggling to survive when a giant black monolith appears, which they marvel at and touch. Soon after, one of the hominids who touched the strange object slays a beast, using a bone as a bludgeon. In celebration, the creature throws the bone into the air. As it turns in slow motion, it cinematically dissolves into a spacecraft. The monolith prompted one crucial change: the discovery of action at a distance. And that change took the hominids from the discovery of tools to interplanetary space flight.

The profundity of electricity is action at a distance. Mechanical, hydraulic, and steam devices were already on the way to where electricity eventually took us with action at a distance. It’s just that because of physics, electricity (and the electromagnetic waves Maxwell described) do action at a distance really well. With them, we can send power and signals over long distances, at nearly the fastest speed possible. Pretty profound.

But not as profound, I think, as fire. Like electricity, fire has always existed. People probably learned how to make it as soon as they saw the powerful things it could do. But its profound effect is that once humanity was able to use it in one of the simplest ways possible, it changed what humanity was. That use of fire is cooking.

For every creature on Earth, there’s a balance sheet for food. There’s an energy cost of obtaining it and digesting it, so that food had better contain enough energy to overcome those costs and leave a bit more for things like reproduction, or a species is doomed.

That is, unless you can use fire. Fire allowed humanity to exploit external energy sources to break down all sorts of things into digestible food. This made food easier to obtain, and less costly (in terms of energy) to digest. The resulting energy surplus could be used for all sorts of things, like making brains bigger and more creative, so recipes got better.
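In balance-sheet terms (a schematic inequality; the symbols are mine, invented just for illustration):

```latex
% A food source is only worth it if the energy it contains exceeds
% the energy spent obtaining and digesting it:
E_{\text{food}} > E_{\text{obtain}} + E_{\text{digest}}
% Cooking spends external energy (fire) to lower E_digest,
% widening the surplus available for things like bigger brains.
```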

Like the action-at-a-distance transition from bone axes to spacecraft, history followed on from fire. Humanity’s creation of cuisine led to our using excess energy for all sorts of creations, not just for survival, but for enrichment and pleasure. Burning things to make our lives easier hasn’t had entirely positive consequences, but fire certainly changed us from creatures that just managed to survive into animals that manage to create.

So the question is, is AI more profound than electricity and fire?

AI certainly creates a new kind of action-at-a-distance. It means that the way we represent the world, and the way we calculate about it can translate into decisions that we can’t fully foresee, through systems that we can’t fully comprehend, and whose actions we can’t fully anticipate.

Will AI change what we are, the way that food allowed us to fuel bigger brains, and create new ways of interacting with the world? In reality, that has been going on since we developed artefacts of all kinds. But the computational devices we now carry with us, which extend the actions of the assumptions of their creators to distances that reach everyone, are clearly changing the way we see and react to our world, and each other. That is profound. And dangerous.

This is the reason it is so important to realise what the word “artificial” means in artificial intelligence. It is not like artificial light: that is to say, real, human-like intelligence, just disembodied and made by man. Instead, it is like artificial flowers: an imitation that looks a bit like the real thing but contains none of its complexity and substance.

AI is changing who we are; that’s inevitable. But as we change, we should realise that we bring innate capabilities to decision making that have been evolving since someone first lit a fire to cook a beast they’d slaughtered with a bone axe. Our large brains are the most complex things in the universe, the product of millions of years of adapting to a complex, uncertain world that doesn’t yield easily to mere computation. Our ability to use electricity to compute at near light speed will enhance us, but it will not replace us. We all need to understand that to cope with the profound changes AI is causing today.

Proud Purple Principle Participant! by Robert Smith

The Purple Principle, certainly one of my favourite podcasts to have participated in, is dedicated to reducing partisanship in American political discourse. That mission couldn’t be more relevant today (08/2020, just in advance of the US presidential election).
This episode features a full-length interview with me, on the role that algorithms play in that polarisation. Visit their site, then listen to episode 6 (from 08/2020) on your podcast provider.

Hardly Working Podcast by Robert Smith

I really enjoyed talking to Brent Orrell for his podcast Hardly Working. Here’s his blurb on the episode, which is out now:

“Technology has been rapidly advancing, and along with it has come an increased reliance on artificial intelligence, algorithms, and other forms of computer programming. Can we trust these programs to uphold our values of inclusion, diversity, and fairness?

Brent talks to Robert Elliot Smith, an artificial intelligence expert and author of Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All, about the flaws of, and history behind, these complex and increasingly influential tools.”

Have a listen; I think we covered some really interesting ground.

Purple Principle Podcast by Robert Smith

I sincerely believe this may be the best clip from an interview with me that I’ve heard. It comes from The Purple Principle, a podcast dedicated to reducing partisanship in American political discourse. After you listen to the 6-minute clip, you’ll probably want to listen to the whole podcast, which features me along with Keith Poole (Professor Emeritus, U. of Georgia), Jason Altmire (former 3-term Congressman, Pennsylvania), Abigail Marsh (Professor of Psychology and Neuroscience, Georgetown), John Opdycke (President, Open Primaries), Laura Sibilia (Legislator, Vermont General Assembly), Charles Wheelan (Co-Founder, Unite America), and Myq Kaplan (stand-up comedian and podcaster). You can find it on Apple, Spotify, Google Podcasts, Stitcher, and Pandora. Here’s that 6-minute clip: