Two Gigs at Blue Dot by Rob Smith

Glad to say I'll be doing two gigs at The Blue Dot Festival (July 7-9 at the marvelous Jodrell Bank Discovery Centre, with the awesome Lovell Telescope). The first (on Saturday the 8th at 11:00AM) will be a second round of the panel discussion on how A.I. will impact our future that I participated in at Transmission London (sponsored by Salon London), with Prof Arthur I Miller and Prof Steve Fuller. The second (on Sunday the 9th at 15:00) will be a little talk I call The Banality of A.I. (in reference to Hannah Arendt's The Banality of Evil). Should be fun... hope to see you there.

RIP Robert Pirsig... by Rob Smith

It is with much sadness that I today read of the death of Robert Pirsig. I believe his philosophical novel Zen and the Art of Motorcycle Maintenance is the one book that's had the most influence on my life. Its examination of what is meant by quality touches every aspect of my own perspective: on life, on my work, and yes, even on A.I. I can still pick it up and feel instantly moved by randomly selected pages. I hope that everyone gets a chance to read it someday, and gets as much from it as I did. Rest in Peace, Mr. Pirsig.

UK Parliament considers what the A.I.s are up to... by Rob Smith

The UK House of Commons Science and Technology Committee has (very appropriately, in my opinion) launched an investigation into the use of algorithms in public decision making. They asked prominent Universities, including UCL, to provide opinions on the matter, and I was glad to put some comments in, which (along with comments from many other UCL authors) resulted in a Parliamentary Evidence document on the subject that some of you may find of interest.

Listen to The Transmission... by Rob Smith

For those of you who might be interested, here's a SoundCloud recording of my recent appearance discussing A.I. at Transmission, the year-end event from Salon London, on February 2, 2017 at The Hospital Club. As was discussed in a previous post, I was onstage discussing the realities of A.I. with great speakers Prof Arthur I Miller, who believes that computers will be artists someday, and Prof Steve Fuller, who apparently wants to live forever. I was the nay-sayer, as I think that A.I. will likely never be human-equivalent in the ways that matter, but that despite this, they are already our overlords.

Automatic for Transmission by Rob Smith

This wasn't just an excuse to post a Joy Division video. I'm proud to announce that I'll be a part of Salon London's marvelous Transmission event, Thursday February 2, 2017, at The Hospital Club in London. I'll be onstage discussing the realities of A.I. with great speakers Prof Arthur I Miller, who believes that computers will be artists someday, and Prof Steve Fuller, who apparently wants to live forever. Myself, I think that A.I. will likely never be human-equivalent in the ways that matter, but that despite this, they are already our overlords. Come to Transmission to see what we all mean.

News and Narratives in Financial Systems (now on Video!) by Rob Smith

There's now a video online of a presentation (starting at around 14:30, where I've cued up the video link) given by Sujit Kapadia from the Bank of England of a paper entitled "News and Narratives in Financial Systems: Exploiting Big Data for Systemic Risk Assessment", which documents some of my work with Sujit and David Gregory (formerly at the BoE) and my UCL colleagues David Tuckett, Rickard Nyman, and Paul Ormerod. Interesting stuff about how human emotion is the key driver of markets, not the other way around.

Fiords and Finances by Rob Smith

Glad to announce that I'll be presenting a paper entitled "News and narratives in financial systems: Exploiting big data for systemic risk assessment", by myself and my UCL colleagues (Rickard Nyman, Paul Ormerod and David Tuckett), along with colleagues from The Bank of England (David Gregory and Sujit Kapadia), at The Workshop on Financial Stability and Macroprudential Policy (sponsored by The Central Bank Research Association and The European System of Central Banks). This will be my first visit to Norway. But I largely posted this so I could say one of my favorite names: Slartibartfast.

Referendums, Models, Democracy, and A.I.s by Rob Smith

A friend of mine recently posted a link to a ComRes poll, published in the Sunday Mirror, taken on the Sunday after the Brexit vote. The opening paragraph of the article that presents the poll results reads:

the public is more likely to think that the existing result should stand and Britain should leave (50%), than think a second referendum should be held (39%).

I was surprised by this, until I actually read the results. When asked whether they agreed with the statement "The result of the existing referendum should be honoured and Britain should leave the EU", unsurprisingly 78 percent of Remain voters disagreed. The surprise is that 21 percent of the Leave voters disagreed as well!

Note that Leave won by roughly 52 percent to 48 percent. So, if only 4 percent of Leave voters had reconsidered and changed their vote, it would have been a tie. And, from the phrasing of the question, and the response from Leave voters, it would seem that's a real possibility, only three days after the referendum.

Yet this seems not to be the case based on the ComRes poll's opening paragraph. Why? In case you haven't clocked it yet, it's because what they've done is averaged Remain voters' agreement with the statement above (100 minus 78 percent, or 22 percent) and Leave voters' agreement with the statement above (100 minus 21 percent, or 79 percent) to come up with about 50 percent, and then presented this numerical conclusion as the idea that

the public is more likely to think that the existing result should stand and Britain should leave (50%)
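
In case the sleight of hand isn't obvious, here's the arithmetic as a tiny sketch. The averaging (simple, or weighted by the 52/48 vote shares) is my reconstruction from the reported figures; ComRes may have weighted things slightly differently:

```python
# Agreement with "the result should be honoured and Britain should leave",
# reconstructed from the disagreement figures the poll reports.
leave_agree = 1.00 - 0.21   # 21% of Leave voters disagreed, so 79% agree
remain_agree = 1.00 - 0.78  # 78% of Remain voters disagreed, so 22% agree

# Averaging the two camps (simply, or weighted by the 52/48 vote shares)
# lands right around the headline figure of "50%".
simple_average = (leave_agree + remain_agree) / 2
weighted_average = 0.52 * leave_agree + 0.48 * remain_agree

print(f"simple average:   {simple_average:.1%}")    # 50.5%
print(f"weighted average: {weighted_average:.1%}")  # 51.6%
```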

Do I think what the Mirror and ComRes have done in their first paragraph is misleading? Yes. But I don't want that thought to obscure an even larger point, and that is that all polls (including referendums) have representational bias, particularly when the polls concern complex human issues.

Statistical interpretations involve summarizing over things, by their very nature. And when those things are the thoughts of people about complex issues, something is always lost. Reducing things to a few simple questions, or one, reduces complex issues to a simple percentage of votes, in effect an average response. As G.E.P. Box famously said, all models are wrong, some models are useful. Polls are statistical models of complex human matters. It is for this reason that polls are a blunt instrument for research in the social sciences, and as a tool in governance.

I'd say that putting polls in charge is actually like putting an A.I. in charge, and we have to note that not only does current technology make this possibility tempting, it makes the likelihood of being sold more governance-by-referendum, and perhaps even governance-by-data-analysis, a distinct possibility. We have to be wary of this temptation. Polls, like A.I.s, are formal systems that ask pat questions, and reduce complex issues to numbers, check boxes, and the interpretations of the statistics that result. Polls, like many modern A.I.s, draw on statistics from humans (e.g., Big Data analysis) to determine a conclusion. The new convenience of such systems (be they Big Data A.I.s or polls) can mask the inevitable representational biases that formal systems cannot avoid. That's bad enough, but one must also be aware that these biases can become self-reinforcing, and that they can also be manipulated by those who create the systems (polls) and those who interpret their conclusions.

This is the reason that we need to sustain the idea of community-based indirect democracy. Direct democracy based on massive referendums or Big Data analysis will always be something more like mob rule than real representation of people's interests, due to the simplifications and biases that formal systems always induce, and the biases and interpretations that can be manipulated by those who control those systems. Electing a real person, who represents a community of constituents small enough to really build relationships with, is a protection against this, and allows for the creation of real, responsive, human democracy.

Let's hope that this blunt referendum result can be turned into something that's more responsive to what people really want and need. And let's all write our M.P.s.

"Also" this... by Rob Smith

I'm also glad to announce that I'll be giving a talk (on how not to believe the AI hype, and see the real monsters) at The Also Festival this summer (17th-19th June in Warwickshire). Lots of other cool speakers there too, as well as music and so many other things to do that the mind boggles. I can't wait!

"Idealizations" paper now fully accepted and published by Rob Smith

Glad to announce that the paper discussed in this previous post, entitled "Idealizations of Uncertainty and Lessons from Artificial Intelligence", is now fully accepted by the online journal Economics. I want to thank the reviewers, and my colleague David Tuckett, for their invaluable contributions that have made this a much better paper.

Some of you will be interested to know I've added brief comments to the paper about AlphaGo, Google's program that beat a human master player in 4 out of 5 games. I hope to get time to write a more thorough blog post on AlphaGo. The hype around this "breakthrough" is driving me insane. I've read the full academic paper, and I believe I can easily explain what the program does.

For now, just know that AlphaGo is just massive computation to assign numerical values to board configurations (based on computer game playing math that originated in the late 50s), then massive lookahead through those numbers, to come up with good moves, nothing more (see the toy sketch at the end of this post). To use words like "intuition" to describe what it is doing (as was done in Forbes) is the worst sort of wishful mnemonic. That one of the programmers of AlphaGo used this word is self-deluded and misleading, and the fact that Forbes presented that statement without critical evaluation just shows how people aren't really thinking about what it means for a machine to "think".

I must finish my book, and soon.
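
To give a flavour of what "numerical values plus lookahead" means, here's a deliberately trivial sketch of that 1950s-style recipe. Everything in it is an invented stand-in (the "game" is just moving along a number line); AlphaGo assigns its values with trained neural networks and does its lookahead with Monte Carlo tree search over Go positions, at colossal scale, but the skeleton is the same:

```python
# Toy sketch: assign numbers to configurations, then look ahead
# through those numbers. All of this is an illustrative stand-in,
# not AlphaGo's actual code.

def evaluate(position: int) -> float:
    """Score a configuration numerically (here: closer to 0 is better)."""
    return -abs(position)

def legal_moves(position: int):
    return (-1, +1)  # in this toy game, you can only step left or right

def apply_move(position: int, move: int) -> int:
    return position + move

def negamax(position: int, depth: int) -> float:
    """Depth-limited lookahead; the sign flip encodes an adversary
    who also picks their best reply at each level."""
    if depth == 0:
        return evaluate(position)
    return max(-negamax(apply_move(position, m), depth - 1)
               for m in legal_moves(position))

def best_move(position: int, depth: int = 4) -> int:
    """Pick the move whose resulting position looks best after lookahead."""
    return max(legal_moves(position),
               key=lambda m: -negamax(apply_move(position, m), depth - 1))

print(best_move(5))  # -1: step toward 0, found purely by scored lookahead
```

No "intuition" required: just numbers and search.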

John Holland, RIP by Rob Smith

It is with a sadness shared by many of my scientific friends and colleagues that I learned of the passing of John Holland, who was, in some sense, my intellectual grandfather. My PhD advisor, Dave Goldberg, was advised by John Holland, and it is in this line of unconventional scientific curiosity that I was lucky enough to be raised.

I remember well, when I was an awkward graduate student at one of my first conferences, sitting in the lunch hall at a table of strangers, all graduate students from better-named Universities, all having accents that had less regional baggage. I was clumsily attempting to tell them what I was doing with Genetic Algorithms (GAs) in my Master's Degree, when John, the father of GAs, walked up, put a friendly hand on my back, and complimented me on a recent paper. That made all the difference to my confidence that day, and ever since. John treated me as he did all others with scientific curiosity: as an equal colleague and friend, regardless of status or rank.

John played a seminal role that extends well beyond the creation of a class of algorithms: he was key in creating a way of thinking about the world, what we now call Complex Systems Science: the study of systems that demonstrate behaviours that aren't well treated by reductive models. I feel that this way of thinking about the world is shifting the entire scientific endeavour in a way that will touch literally every human being's life in the coming millennia.

John will be missed, but his intellectual and personal legacy, which helped train Dave, and then me, and so many others, will echo throughout time. RIP, John Holland.

Uncertainty has become a Wishful Mnemonic by Rob Smith

New paper out for discussion, this one in the journal Economics, entitled "Idealizations of Uncertainty, and Lessons from Artificial Intelligence". In it I'm trying to make a point about how the modelling of human uncertainty via probability theory isn't really descriptive science, and how the history of AI teaches us what to expect from such models.

For engineering AI systems, whether probability theory is descriptive of humans doesn't matter: who cares if it isn't how people think, as long as it does something useful. Those who use real-world AI systems have learned their limitations, relegating the idea of "expert systems" to "decision support systems", and realizing you gotta put a person between the AI and the real decision.

But for economic modelling of the human agents that make decisions in the economy, it's another matter, and we need to be aware of the brittleness of old AI, the intractability of scaling up knowledge bases, and the temptation of wishful mnemonics: words that wishfully call a computational construct by the name of a human characteristic, while the real similarity isn't established scientifically.

Uncertainty, when modelled with probabilities, is certainly a wishful mnemonic. Uncertainty isn't a phenomenon in the world, it is a phenomenon in our minds: the world isn't uncertain, we are uncertain about the world. The fact is we don't have evidence that humans reason with probabilities (in fact, much of the evidence is to the contrary). The reality is we are as unable to build comprehensive probabilistic models of human uncertainty as we are unable to build comprehensive logical models of human expertise, as we discovered 30-40 years ago. Double ditto for models of economics, because economics is all about human actors making decisions under uncertainty.

Many economists want a world of crisp, well-informed, rational decision makers, partly because it leads to nice notions of equilibria, which nicely connect to models of free market economics "optimizing value". But that world is not our world. We need to start seeing the real behaviour of people, with all their complexities and emotions, as adaptive and filled with interesting emergent behaviours, but not optimized. This may mean that economics will need to deal less with "value", and more with human values.

The great statistician G. E. P. Box said

All models are wrong, some models are useful

I think we really need to start understanding that our models of human decision making under uncertainty are wrong, and try to understand where they are and aren't useful. I hope this paper helps in that understanding. And, BTW, Ex Machina rocks.

The Florence Manifesto (A New Paper) by Rob Smith

I'm very pleased to announce the release of a new paper in the journal Critical Review, entitled "Uncertainty, Decision Science, and Policy Making: A Manifesto for a Research Agenda", of which I'm one of many authors (with my name slightly mis-spelled; how'd I miss that!). I'm pretty proud of this one. It's the outcome of a very interesting conference I attended about a year ago in Florence, sponsored by an EU FP7 grant on Global System Dynamics and Policy. The paper talks about how rigid models of how people make decisions are at the core of problems in economics, policy-making, and other areas where social science is treated with idealized mathematical models that just don't reflect the realities of how people behave. Problems like the financial crisis of 2008. This one is really worth a read, I think, even for non-technical people with an interest in the way the world is governed. Hope some people enjoy reading it as much as I enjoyed participating in its creation.

When the rapture comes, Google's cars will be unmanned... by Rob Smith

A friend sent me this article from the NYT the other day, 'cause he knows I work in AI. I read it and Public Enemy started playing in my head.

I've just got to say it: those academics and journalists who say AI is oh-so-close to becoming ubiquitous in day-to-day life these days, replacing humans in many tasks, are just echoing hype that's been coming around about every 5 years since the 1950s (actually, since Babbage, and even Leibniz). But this time around, the hype is backed by some of the most successful companies in the world, who provide lots of services that we depend on and who we trust. And that makes this hype lots more dangerous than in the past. So a brother gotta represent.

The NYT article is a few years old, but it's a great example of the AI hype that the media is dishing out thick and fast nearly every day lately:

"The scientists and engineers at the Computer Vision and Pattern Recognition conference are creating a world in which cars drive themselves, machines recognize people and “understand” their emotions, and humanoid robots travel unattended, performing everything from mundane factory tasks to emergency rescues."

Bollocks.

A reporter being impressed at yet another in the endless series of academic conferences with gee-whiz results proves nothing new, except that this is a very marketable story these days.

The prime example of how marketable comes in the form of the driverless car hype, which is just taken as true these days, in report after report. A few days ago an article came out with a headline saying that driverless cars are now going on the roads. Except that if you check under the hood of this widely reported story, you find out that these "cars" are really more like golf carts, have a top speed of 25 miles per hour, and are only capable of driving on certain routes in the small town of Mountain View, CA, which Google has hi-res 3-D scanned and data processed, at great expense, utilising lots of human effort as well as big data crunching. Those roads are largely where the "driverless cars" have logged the "millions of miles" everyone is talking about.

The reality is that in these new "road-ready" vehicles a driver will have to be present at all times while they are moving in normal, non-regulated conditions. So this is just complicated cruise control, à la Google.

Will AI assist people in some driving tasks? Sure.

It already does so in parking, in controlling speed, in avoiding collisions in near-miss emergencies, etc.

Those assistive ideas will continue to advance.

But think about it: we've had very sophisticated autopilots in planes for years, and planes have special rules for staying very, very far away from each other, in a space whose only unforeseen obstacles are wind and the very, very occasional bird. Even in that clear-sailing, low-density world, we have human air traffic controllers constantly watching like hawks. As a matter of law and practicality we absolutely do not let planes fly themselves, at all. Even drones are really flown by people, no matter how much technology aids them.

As a side issue, this is precisely why the reports of impending drone deliveries from Amazon are hype. This will not happen, except perhaps over the Antarctic or Outback: airspace laws won't allow it, and even if they did, the expert manpower load (think of the guys who fly military drones, but in a huge workforce that makes sure every geek on Earth quickly gets the latest X-men release on DVD) makes this nonsense a practical impossibility. Autonomous drones won't be allowed, and military-style remote controlled drones are commercially farcical.

Back to the road: In our future, will people be sitting in a car, watching it drive itself? Nope. It'll never be approved, for legal and liability reasons.

The truth is we can't get people to drive without looking at their phones when they actually are required to control the damned cars, much less when they are the emergency backup system in a driverless vehicle. Who will insure these things? Who will change the law to allow them to drive on our roads? Answer: no one. Or at least no one who is not tricked by this ridiculous load of hype. If we do get talked into this idea, it won't be long before it is crushed, for some very good reasons. But I think people are sensible, and just plain scared enough, that when the rubber hits the road on this development, it will run straight into a wall.

What about scenarios with some tech-enabled limits and controls: cars driving themselves without a driver, coming to pick you up and take you to your destination, or delivering heavy goods, say? This will only happen if we put those vehicles in specially reserved lanes on certain pre-planned routes, with lots of tight controls on what the cars can do.

So hey, I've got an idea: let's just put down rails, and add the necessary human operator (local or remote) to cover the unforeseen. Then you've got yourself what you call a train.

Which is a far better technology for these purposes anyway. The world needs fewer cars, not more: it seems that everyone is forgetting that. Cars have been a social disaster, and have degraded the quality of life and transport dramatically around the world. Think how much worse driverless cars will make this problem. Are A-holes with SUVs not enough of an irritation to you? Just think of those same cars, but with no human to blow your horn at, just some rich guy in the backseat reading Cigar Aficionado while Google helps him cut you up.

London has had the good sense to restrict cars, and even London has still more problems to solve in this area (like HGVs with drivers that can't coexist non-lethally with eco-friendly bikes). I don't see London, or any of the other polluted, near-gridlocked cities of the world, moving towards driverless cars, unless they are really the trains I've made note of.

But back to AI. AI is helping advance lots of important areas (like medicine and healthcare), but the reality is it's doing it in ways that have almost nothing to do with the way human intelligence works. The truth is that AI is really successful in supplementing human intelligence in some well-posed settings, but not very good at replacing human intelligence in any non-trivial human decision making settings. Most people really don't understand these facts, and that's what I'm trying to write about these days.

Why do companies like Google and Amazon want us to believe this hype? Perhaps it's just because they believe it themselves. Perhaps it's an irrational arrogance of the newly rich and powerful. Or perhaps it's just a brand prestige manoeuvre.

But in any case, it's hype.  

Terminator X, why don't you tell what time it is, boyeeeee! 

Co-Author on Paper at INET'15 by Rob Smith

Glad to be a co-author on a new paper, presented by my friend and colleague David Tuckett, at the 2015 Conference of the Institute for New Economic Thinking. The paper is about a new theory of decision-making (Conviction Narrative Theory, CNT) and a new methodology for economic prediction (Directed Algorithmic Text Analysis, DATA). In many ways, it's an attempt to operationalize things like Keynes' theory of long-term expectations and Soros' theory of reflexivity, making use of the new data sources and computer methodologies that are available now. It shows what I think are exciting new results, about how measures of people's conviction emotions, drawn from text sources, lead the technical variations of the economy. Animal spirits in action.
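
For a flavour of the text-analysis side, here's a minimal sketch of the kind of relative sentiment measure involved. The two tiny word lists and the normalisation are simplified stand-ins I've made up for illustration; they are not the lexicons or formula from the paper:

```python
# Minimal sketch: score a document by the balance of "excitement"
# words against "anxiety" words, normalised by length. The lexicons
# below are illustrative stand-ins, not those used in the paper.

EXCITEMENT = {"optimistic", "confident", "boom", "growth", "opportunity"}
ANXIETY = {"fear", "worried", "crisis", "collapse", "uncertain"}

def conviction_index(text: str) -> float:
    """(excitement mentions - anxiety mentions) / total words."""
    words = [w.strip('.,;:!?').lower() for w in text.split()]
    if not words:
        return 0.0
    up = sum(w in EXCITEMENT for w in words)
    down = sum(w in ANXIETY for w in words)
    return (up - down) / len(words)

# Track the index over a stream of documents through time, then ask
# whether its movements lead movements in economic variables.
print(conviction_index("Investors are confident about strong growth."))   # > 0
print(conviction_index("Markets fear a collapse amid uncertain times."))  # < 0
```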

Mafia in Africa (more work from an amazing student) by Rob Smith

Fascinating work on how the Italian Mafia has penetrated countries in Africa, from a group including Stefano Gurciullo, a researcher whom I co-supervise as a PhD student in Political Science and Computer Science at UCL. This builds on Stefano's past work using social network analysis to examine the Mafia's penetration of businesses in his homeland of Sicily. Stefano's PhD project (with his primary supervisor Slava Mikhaylov) is on new models of how economic shocks affect the stability of the international banking system, and is in collaboration with researchers at The Bank of England. I'm very lucky to have such fascinating and brave students to work with!
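
For those curious what that kind of social network analysis looks like in practice, here's a minimal sketch using the networkx library. The graph below is entirely invented for illustration; it is not data from Stefano's work:

```python
# Minimal sketch: build a graph of actors and business ties, then ask
# which nodes broker the most paths between others. The nodes and
# edges are invented for illustration only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("broker", "firm_a"), ("broker", "firm_b"), ("broker", "clan"),
    ("firm_a", "supplier"), ("firm_b", "supplier"), ("clan", "firm_c"),
])

# Betweenness centrality: high scores flag nodes sitting on many
# shortest paths, a common way to spot likely intermediaries.
for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node:10s} {score:.2f}")
```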