This wasn't just an excuse to post a Joy Division video. I'm proud to announce that I'll be a part of Salon London's marvelous Transmission event, Thursday February 2, 2017, at the Hospital Club in London. I'll be onstage discussing the realities of A.I. with two great speakers: Prof Arthur I Miller, who believes that computers will be artists someday, and Prof Steve Fuller, who apparently wants to live forever. Myself, I think that A.I.s may never be human-equivalent in the ways that matter, but that despite this, they are already our overlords. Come to Transmission to see what we all mean.
News and Narratives in Financial Systems (now on Video!) /
There's now a video online of a presentation (starting at around 14:30, where I've cued up the video link) given by Sujit Kapadia of the Bank of England on a paper entitled "News and Narratives in Financial Systems: Exploiting Big Data for Systemic Risk Assessment", which documents some of my work with Sujit and David Gregory (formerly at the BoE) and my UCL colleagues David Tuckett, Rickard Nyman, and Paul Ormerod. Interesting stuff about how human emotion is the key driver of markets, not the other way around.
How the A.I.s Elected Trump /
Fiords and Finances /
Glad to announce that I'll be presenting a paper entitled "News and narratives in financial systems: Exploiting big data for systemic risk assessment" by myself, my colleagues from UCL (Rickard Nyman, Paul Ormerod and David Tuckett) along with colleagues from The Bank of England (David Gregory and Sujit Kapadia) at The Workshop on Financial Stability and Macroprudential Policy (sponsored by The Central Bank Research Association and The European System of Central Banks). This will be my first visit to Norway. But I largely posted this so I could say one of my favorite names: Slartibartfast.
Attitude at Latitude /
I'm excited to be talking about A.I. at this weekend's Latitude Festival. The conversation will be between me and Prof Arthur I. Miller (author of numerous interesting books and Professor Emeritus of History and Philosophy of Science at University College London). Our discussion will be led by the esteemed Helen Bagnall, host of London's Salon events.
Referendums, Models, Democracy, and A.I.s /
A friend of mine recently posted a link to a ComRes poll, published in the Sunday Mirror, taken on the Sunday after the Brexit vote. The opening paragraph of the article that presents the poll results reads:
the public is more likely to think that the existing result should stand and Britain should leave (50%), than think a second referendum should be held (39%).
I was surprised by this, until I actually read the results. When asked whether they agreed with the statement "The result of the existing referendum should be honoured and Britain should leave the EU", unsurprisingly 78 percent of Remain voters disagreed. The surprise is that 21 percent of the Leave voters disagreed as well!

Note that Leave won 52 percent to 48 percent, roughly. So, if only 4 percent of Leave voters reconsidered, and changed their vote, it would have been a tie. And, from the phrasing of the question, and the response from Leave voters, it would seem that's a real possibility, only three days after the referendum.

Yet, this seems not to be the case based on the ComRes poll's opening paragraph. Why? In case you haven't clocked it yet, it's because what they've done is averaged Remain voters' agreement with the statement above (a total of 100 - 78 = 22 percent) and Leave voters' agreement with the statement above (a total of 100 - 21 = 79 percent) to come up with about 50 percent, and then presented this numerical conclusion as the idea that
the public is more likely to think that the existing result should stand and Britain should leave (50%)
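To make the arithmetic concrete, here's a minimal sketch of how that averaging produces the headline figure. ComRes's exact weighting (which may include non-voters and don't-knows) isn't spelled out, so the numbers below are my reconstruction, not their methodology:

```python
# A minimal sketch (my reconstruction, not ComRes's actual weighting) of how
# averaging the two camps' agreement produces the headline "50%" figure.
remain_agree = 100 - 78   # 22% of Remain voters agree the result should stand
leave_agree = 100 - 21    # 79% of Leave voters agree the result should stand

# Straight average of the two camps:
simple_avg = (remain_agree + leave_agree) / 2             # 50.5 -- "about 50 percent"

# Weighting by the referendum vote shares gives a similar number:
weighted_avg = 0.48 * remain_agree + 0.52 * leave_agree   # about 51.6

print(simple_avg, weighted_avg)
```

Either way, the combined figure hides exactly the detail that surprised me: that a fifth of Leave voters disagreed with their own result standing.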
Do I think what the Mirror and ComRes have done in their first paragraph is misleading? Yes. But I don't want that thought to obscure an even larger point, and that is that all polls (including referendums) have representational bias, particularly when the polls concern complex human issues.

Statistical interpretations involve summarizing over things, by their very nature. And when those things are the thoughts of people about complex issues, something is always lost. Reducing things to a few simple questions, or one, reduces complex issues to a simple percentage of votes, in effect an average response. As G.E.P. Box famously said, all models are wrong, some models are useful. Polls are statistical models of complex human matters. This makes polls a blunt instrument, both for research in the social sciences and as a tool of governance.

I'd say that putting polls in charge is actually like putting an A.I. in charge, and we have to note that not only does current technology make this possibility tempting, it makes being sold more governance-by-referendum, and perhaps even governance-by-data-analysis, a distinct possibility. We have to be wary of this temptation.

Polls, like A.I.s, are formal systems that ask pat questions, and reduce complex issues to numbers, check boxes, and the interpretations of the statistics that result. Polls, like many modern A.I.s, draw on statistics from humans (e.g., Big Data analysis) to determine a conclusion. The new convenience of such systems (be they Big Data A.I.s or polls) can mask the inevitable representational biases that formal systems cannot avoid. That's bad enough, but one must also be aware that these biases can become self-reinforcing, and that they can also be manipulated by those who create the systems (polls) and those who interpret their conclusions.

This is the reason that we need to sustain the idea of community-based indirect democracy. Direct democracy based on massive referendums or Big Data analysis will always be something more like mob rule than real representation of people's interests, due to the simplifications and biases that formal systems always induce, and the biases and interpretations that can be manipulated by those who control those systems. Electing a real person, who represents a community of constituents small enough to really build relationships with, is a protection against this, and allows for the creation of real, responsive, human democracy.

Let's hope that this blunt referendum result can be turned into something that's more responsive to what people really want and need. And let's all write our M.P.s.
"Also" this... /
I'm also glad to announce that I'll be giving a talk (on how not to believe the AI hype, and see the real monsters) at The Also Festival this summer (17th-19th June in Warwickshire). Lots of other cool speakers there too, as well as music and so many other things to do that the mind boggles. I can't wait!
Join me in The Wilderness /
I'm glad to announce that I'll be one of the speakers at The Wilderness Festival this summer (August 4th - 7th 2016, Cornbury Park, Oxfordshire), in a session entitled "THE LAST INVENTION WE EVER MAKE?", on AI and the future. The festival looks to be a great one, so I really hope to see some of you there! (And check the stunning music lineup!)
"Idealizations" paper now fully accepted and published /
Glad to announce that the paper discussed in this previous post, entitled "Idealizations of Uncertainty and Lessons from Artificial Intelligence", is now fully accepted by the online journal Economics. I want to thank the reviewers, and my colleague David Tuckett, for their invaluable contributions that have made this a much better paper.

Some of you will be interested to know I've added brief comments to the paper about AlphaGo, Google's program that beat a human master player in 4 out of 5 games. I hope to get time to write a more thorough blog post on AlphaGo. The hype around this "breakthrough" is driving me insane. I've read the full academic paper, and I believe I can easily explain what the program does.

For now, just know that AlphaGo is just massive computation to assign numerical values to board configurations (based on computer game-playing math that originated in the late 50s), then massive lookahead through those numbers, to come up with good moves, nothing more. To use words like "intuition" to describe what it is doing (as was done in Forbes) is the worst sort of wishful mnemonics. That one of the programmers of AlphaGo used this word is self-deluding and misleading, and the fact that Forbes presented that statement without critical evaluation just shows how people aren't really thinking about what it means for a machine to "think".

I must finish my book, and soon.
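To give a feel for what "numerical values plus lookahead" means, here's a minimal sketch of that late-50s recipe: classic minimax search over a toy game. This is my illustration of the general idea, not AlphaGo's actual code, which swaps in learned evaluation networks and Monte Carlo tree search for the hand-written pieces below:

```python
# A minimal sketch (my illustration, not AlphaGo's code) of the late-1950s
# recipe: assign numerical values to board positions, then look ahead
# through the game tree to pick the move with the best guaranteed value.
# The evaluation function and move generator here are hypothetical stand-ins.

def evaluate(position):
    """Static evaluation: a number saying how good a position looks."""
    return sum(position)  # a trivial stand-in; a real one encodes game knowledge

def legal_moves(position):
    """Hypothetical move generator: each 'move' appends a value in -1..1."""
    return [position + [v] for v in (-1, 0, 1)]

def minimax(position, depth, maximizing):
    """Look ahead `depth` plies, assuming both sides play the numbers optimally."""
    if depth == 0:
        return evaluate(position)
    values = [minimax(child, depth - 1, not maximizing)
              for child in legal_moves(position)]
    return max(values) if maximizing else min(values)

def best_move(position, depth=3):
    """Pick the child position whose looked-ahead value is best for us."""
    return max(legal_moves(position),
               key=lambda child: minimax(child, depth - 1, maximizing=False))

if __name__ == "__main__":
    print(best_move([0]))  # the move whose looked-ahead value is highest
```

The point is that everything in this loop is arithmetic over positions and moves; calling the resulting numbers "intuition" is exactly the kind of wishful mnemonic I'm complaining about.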
John Holland, RIP /
It is with a sadness shared by many of my scientific friends and colleagues that I learned of the passing of John Holland, who is, in some sense, my intellectual grandfather. My PhD advisor, Dave Goldberg, was advised by John Holland, and it is in this line of unconventional scientific curiosity that I was lucky enough to be raised.

I remember well when, as an awkward graduate student at one of my first conferences, I sat in the lunch hall at a table of strangers, all graduate students from better-named universities, all having accents that carried less regional baggage. I was clumsily attempting to tell them what I was doing with Genetic Algorithms (GAs) in my Master's degree, when John, the father of GAs, walked up, put a friendly hand on my back, and complimented me on a recent paper. That made all the difference to my confidence that day, and ever since. John treated me as he did all others with scientific curiosity: as an equal colleague and friend, regardless of status or rank.

John played a seminal role that extends well beyond the creation of a class of algorithms: he was key in creating a way of thinking about the world, what we now call Complex Systems Science: the study of systems that demonstrate behaviours that aren't well treated by reductive models. I feel that this way of thinking about the world is shifting the entire scientific endeavour in a way that will touch literally every human being's life in the coming millennia.

John will be missed, but his intellectual and personal legacy, which helped train Dave, and then me, and so many others, will echo throughout time. RIP, John Holland.
How Dated Theories & Underlying Research Misguide Policy /
Good blog post at The Institute for New Economic Thinking (INET) by my friend, co-author, and colleague at The UCL Centre for the Study of Decision-Making Uncertainty, David Tuckett. This post is about our "Florence Manifesto" paper, which I previously blogged about.
Uncertainty has become a Wishful Mnemonic /
New paper out for discussion, this one in E-conomics, entitled "Idealizations of Uncertainty, and Lessons from Artificial Intelligence". In it I'm trying to make a point about how the modelling of human uncertainty via probability theory isn't really descriptive science, and how the history of AI teaches us what to expect from such models.

For engineering AI systems, whether probability theory is descriptive of humans doesn't matter: who cares if it isn't how people think, as long as it does something useful. Those who use real-world AI systems have learned their limitations, relegating the idea of "expert systems" to "decision support systems", and realizing you gotta put a person between the AI and the real decision.

But for economic modelling of the human agents that make decisions in the economy, it's another matter, and we need to be aware of the brittleness of old AI, the intractability of scaling up knowledge bases, and the temptation of wishful mnemonics: words that wishfully call a computational construct by the name of a human characteristic, while the real similarity isn't established scientifically.

Uncertainty, when modelled with probabilities, is certainly a wishful mnemonic. Uncertainty isn't a phenomenon in the world, it is a phenomenon in our minds: the world isn't uncertain, we are uncertain about the world. The fact is we don't have evidence that humans reason with probabilities (in fact, much of the evidence is to the contrary). The reality is we are as unable to build comprehensive probabilistic models of human uncertainty as we are unable to build comprehensive logical models of human expertise, as we discovered 30-40 years ago. Double ditto for models of economics, because economics is all about human actors making decisions under uncertainty.

Many economists want a world of crisp, well-informed, rational decision makers, partly because it leads to nice notions of equilibria, which nicely connect to models of free market economics "optimizing value". But that world is not our world. We need to start seeing the real behaviour of people, with all their complexities and emotions, as adaptive and filled with interesting emergent behaviours, but not optimized. This may mean that economics will need to deal less with "value", and more with human values.

The great statistician G. E. P. Box said
All models are wrong, some models are useful
I think we really need to start understanding that our models of human decision-making under uncertainty are wrong, and try to understand where they are and aren't useful. I hope this paper helps in that understanding.

And, BTW, Ex Machina rocks.
The Florence Manifesto (A New Paper) /
I'm very pleased to announce the release of a new paper in the journal Critical Review, entitled "Uncertainty, Decision Science, and Policy Making: A Manifesto for a Research Agenda", of which I'm one of many authors (with my name slightly mis-spelled, how'd I miss that!).

I'm pretty proud of this one. It's the outcome of a very interesting conference I attended about a year ago in Florence, sponsored by an EU FP7 grant on Global System Dynamics and Policy. The paper talks about how rigid models of how people make decisions are at the core of problems in economics, policy-making, and other areas where social science is treated with idealized mathematical models that just don't reflect the realities of how people behave. Problems like the financial crisis of 2008.

This one is really worth a read, even for non-technical people with an interest in the way the world is governed, I think. Hope some people enjoy reading it as much as I enjoyed participating in its creation.
When the rapture comes, Google's cars will be unmanned... /
A friend sent me this article from the NYT the other day, 'cause he knows I work in AI. I read it and Public Enemy started playing in my head.
I've just got to say it: those academics and journalists who say AI is oh-so-close to becoming ubiquitous in day-to-day life these days, replacing humans in many tasks, are just echoing hype that's been coming around about every 5 years since the 1950s (actually, since Babbage, and even Leibniz). But this time around, the hype is backed by some of the most successful companies in the world, who provide lots of services that we depend on and who we trust. And that makes this hype lots more dangerous than in the past. So a brother gotta represent.
The NYT article is a few years old, but it's a great example of the AI hype that the media is dishing out thick and fast nearly every day lately:
"The scientists and engineers at the Computer Vision and Pattern Recognition conference are creating a world in which cars drive themselves, machines recognize people and “understand” their emotions, and humanoid robots travel unattended, performing everything from mundane factory tasks to emergency rescues."
Bollocks.
A reporter being impressed at yet another in the endless series of academic conferences with gee-whiz results proves nothing new, except that this is a very marketable story these days.
The prime example of how marketable comes in the form of the Google driverless car story, which is just taken as true these days, in report after report. A few days ago an article came out with a headline saying that driverless cars are now going on the roads. Except that if you check under the hood of this widely reported story, you find out that these "cars" are really more like golf carts, have a top speed of 25 miles per hour, and are only capable of driving on certain routes in the small town of Mountain View, CA, which Google has hi-res 3-D scanned and data processed, at great expense, utilising lots of human effort as well as big data crunching. Those roads are largely where the "driverless cars" have logged the "millions of miles" everyone is talking about.
The reality is that in these new "road-ready" vehicles a driver will have to be present at all times while they are moving in normal, non-regulated conditions. So this is just complicated cruise control, à la Google.
Will AI assist people in some driving tasks? Sure. It already does so in parking, in controlling speed, in avoiding collisions in near-miss emergencies, etc. Those assistive ideas will continue to advance.
But think about it: we've had very sophisticated autopilots in planes for years, and planes have special rules for staying very, very far away from each other, in a space whose only unforeseen obstacles are wind and the very, very occasional bird. Even in that clear-sailing, low-density world, we have human air traffic controllers constantly watching like hawks. As a matter of law and practicality we absolutely do not let planes fly themselves, at all. Even drones are really flown by people, no matter how much technology aids them.
As a side issue, this is precisely why the reports of impending drone deliveries from Amazon are hype. This will not happen, except perhaps over the Antarctic or the Outback: airspace laws won't allow it, and even if they did, the expert manpower load (think of the guys who fly military drones, but in a huge workforce that makes sure every geek on Earth quickly gets the latest X-Men release on DVD) makes this nonsense a practical impossibility. Autonomous drones won't be allowed, and military-style remote-controlled drones are commercially farcical.
Back to the road: in our future, will people be sitting in a car, watching it drive itself? Nope. It'll never be approved, either legally or in terms of liability.
The truth is we can't get people to drive without looking at their phones when they actually are required to control the damned cars, much less when they are the emergency backup system in a driverless vehicle. Who will insure these things? Who will change the law to allow them to drive on our roads? Answer: no one. Or at least no one who is not tricked by this ridiculous load of hype. If we do get talked into this idea, it won't be long before it is crushed, for some very good reasons. But I think people are sensible, and just plain scared enough, that when the rubber hits the road on this development, it will run straight into a wall.
What about cars with some tech-enabled limits and controls: vehicles driving themselves without a driver, to come pick you up and take you to your destination, or deliver heavy goods, say? This will only happen if we put those vehicles in specially reserved lanes on certain pre-planned routes, with lots of tight controls on what the cars can do.
So hey: I've got an idea: let's just put down rails, and add the necessary human operator (local or remote) to cover the unforeseen. Then you've got yourself what you call a train.
Which is a far better technology for these purposes anyway. The world needs fewer cars, not more: it seems that everyone is forgetting that. Cars have been a social disaster, and have degraded the quality of life and transport dramatically around the world. Think how much worse driverless cars will make this problem. Are A-holes with SUVs not enough of an irritation to you? Just think of those same cars, but with no human to blow your horn at, just some rich guy in the backseat reading Cigar Aficionado while Google helps him cut you up.
London has had the good sense to restrict cars, and even London has more problems to solve in this area (like HGVs with drivers that can't coexist non-lethally with eco-friendly bikes). I don't see London, or any of the other polluted, near-gridlocked cities of the world, moving towards driverless cars, unless they are really the trains I've made note of.
But back to AI. AI is helping advance lots of important areas (like medicine and healthcare), but the reality is it's doing it in ways that have almost nothing to do with the way human intelligence works. The truth is that AI is really successful at supplementing human intelligence in some well-posed settings, but not very good at replacing human intelligence in any non-trivial human decision-making settings. Most people really don't understand these facts, and that's what I'm trying to write about these days.
Why do companies like Google and Amazon want us to believe this hype? Perhaps it's just because they believe it themselves. Perhaps it's an irrational arrogance of the newly rich and powerful. Or perhaps it's just a brand prestige manoeuvre.
But in any case, it's hype.
Co-Author on Paper at INET'15 /
Glad to be a co-author on a new paper, presented by my friend and colleague David Tuckett, at the 2015 Conference of the Institute for New Economic Thinking. The paper is about a new theory of decision-making (Conviction Narrative Theory, CNT) and a new methodology for economic prediction (Directed Algorithmic Text Analysis, DATA). In many ways, it's an attempt to operationalize things like Keynes' theory of long-term expectations and Soros' theory of reflexivity, making use of the new data sources and computer methodologies that are available now. It shows what I think are exciting new results, about how measures of people's conviction emotions, drawn from text sources, lead the technical variations of the economy. Animal spirits in action.
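For readers curious what "measures of conviction emotions drawn from text" could even look like, here's a toy sketch of the general idea: scoring documents for the balance of "approach" versus "avoidance" emotion words over time. The word lists and the scoring formula below are hypothetical stand-ins of mine, not the paper's actual DATA methodology:

```python
# A toy sketch (my illustration, not the paper's actual DATA methodology) of
# extracting an emotion measure from text: count "approach" (excitement) words
# versus "avoidance" (anxiety) words, and track the relative balance.
# The word lists and formula are hypothetical stand-ins.

APPROACH_WORDS = {"optimistic", "confident", "excited", "opportunity", "growth"}
AVOIDANCE_WORDS = {"anxious", "worried", "fear", "crisis", "uncertainty"}

def relative_sentiment(text: str) -> float:
    """Return (approach - avoidance) / (approach + avoidance) for one document."""
    words = text.lower().split()
    approach = sum(w in APPROACH_WORDS for w in words)
    avoidance = sum(w in AVOIDANCE_WORDS for w in words)
    total = approach + avoidance
    return 0.0 if total == 0 else (approach - avoidance) / total

if __name__ == "__main__":
    docs_by_month = {
        "2007-06": "confident growth and opportunity everywhere",
        "2008-10": "fear and crisis and deep uncertainty in the markets",
    }
    for month, text in docs_by_month.items():
        print(month, round(relative_sentiment(text), 2))
```

The claim in the paper is that time series of this general kind, built from large text archives, lead conventional economic indicators rather than lag them.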
Mafia in Africa (more work from an amazing student) /
Fascinating work on how the Italian Mafia has penetrated countries in Africa, from a group including Stefano Gurciullo, a researcher whom I co-supervise as a PhD student in Political Science and Computer Science at UCL. This builds on Stefano's past work using social network analysis to examine the Mafia's penetration of businesses in his homeland of Sicily. Stefano's PhD project (with his primary supervisor Slava Mikhaylov) is on new models of how economic shocks affect the stability of the international banking system, and is in collaboration with researchers at The Bank of England. I'm very lucky to have such fascinating and brave students to work with!
It's Official: The AIs Hear All Our Secrets /
This week, Stephen Hawking said that self-aware AIs pose a threat to the future of humanity. I'm afraid he's right, but not in the way most people might think.

The threat of AI isn't a self-aware movie monster. The evil we should fear is more like that identified by Hannah Arendt: it's not special, it's banal. And it's not in the future, it's right now.

Case in point: I was talking to a friend of mine in Seattle this week, and he told me a story about a family conversation over Thanksgiving dinner. He's telling his family about Cafe Juanita, a recently-discovered local eatery that he and his wife enjoy. The name of the restaurant is mentioned a number of times, and my friend's daughter pulls out her new iPhone 6 to look it up. She types in "Cafe...", and immediately Cafe Juanita pops up as the number one item in the search. The family thinks this is weird, so everyone pulls out their (non-iPhone) smartphones. None of them return Cafe Juanita in the top 20 hits.

Are you ahead of me here? That's right: Siri was activated on the iPhone, and apparently it was eavesdropping*.

Lucky for little Cafe Juanita, you might say, and I'd agree with you. But before we get too complacent, think about what pays for all the AI that is serving us. Remember how Gmail is paid for by AdWords? Those annoying posts Facebook sticks in your newsfeed, that look vaguely like they came from your friends? The way Amazon thinks that just because you bought Winter Soldier you want offer emails on every comic-book movie ever made?

(Not to mention that Siri might hear your most private conversations: think of the ad placements that might generate, while you are searching in front of your business associates or your kids. Bet that will make you turn off your phone in the bedroom on date night.)

I'd say the threat isn't self-aware AIs listening to our conversations, coldly and jealously plotting against us. That would at least be interesting (in a purely academic way, of course). No, it's that our conversations are just more fuel for simplistic AI engines that feed on banal consumerism.

These AIs are listening *now*. And they make a nice little profit. It's the junk mail strategy: sure, most people just throw it away, but it's cheap enough, so if 0.01 percent buy, that's enough to make the monster grow. If you don't believe that commercial entities that prey on our most basic instincts and greatest vulnerabilities will expand and come to dominate, think about the fast food industry, big media, or the major political parties.

It's big, it's out there, it's profitable, and we're feeding it with our "big data". And it's not interested in becoming a higher form of intelligence, of evolving consciousness, or even ruling us with a memetic chrome fist. It just wants to sell us crap, based on the simple-minded model of automated "personalization" (read "targeted marketing").

And it absolutely will not stop until we buy.

At least you can crush the head of The Terminator with an industrial press. How do you destroy a massive network of schlock-peddling AIs that treat us all like profit centre paramecia?

There's something to go all John Connor on, Professor H. If you want to take it underground, give me a call. But let's both turn off Siri first.

*Side note: I looked into this, and Siri doesn't currently listen all the time, except under very particular circumstances, so this anecdote may be missing some details. But there are a number of such anecdotes out there, and continuous smartphone listening is certainly on the cards.
Spark with Nora Young /
Another radio piece about a previous blog post, this one on Canadian radio. Have a listen here.
Word of Mouth /
Today I'll be in an interview piece on "Word of Mouth" from New Hampshire Public Radio. It's regarding a previous piece in this blog, which you can read here. You can listen to the interview online, as well.
It's Official: AIs are now re-writing history /
The other day I created a Google+ album of photos from our holiday in France. Google’s AutoAwesome algorithms applied some nice Instagram-like filters to some of them, and sent me emails to let me have a look at the results. But there was one AutoAwesome that I found peculiar. It was this one, labeled with the word “Smile!” in the corner, surrounded by little sparkle symbols.
It’s a nice picture, a sweet moment with my wife, taken by my father-in-law, in a Normandy bistro. There’s only one problem with it. This moment never happened.

The photo is a not-so-subtle combination of this one:
and this one:
Note the position of my hands, the fellow in the background, and my wife’s smile. Actually, these photos were part of a “burst” of twelve that my iPhone created when my father-in-law accidentally held down the button too long. I only uploaded two photos from this burst to see which one my wife liked better.

So Google’s algorithms took the two similar photos and created a moment in history that never existed, one where my wife and I smiled our best (or what the algorithm determined was our best) at the exact same microsecond, in a restaurant in Normandy.

So what? Good for the algorithm’s designers, some may say. Take burst photos, and they AutoAwesomely put together what you meant to capture: a perfectly coordinated smiley moment. Some may say that, but honestly, I was a bit creeped out.

Over lunch, I pointed all this out to my friend Cory Doctorow. I told him that algorithms are, without prompting from their human designers or the owners of the photos, creating human moments that never existed.

He was somewhat nonplussed. He reminded me that cameras have always done that. The images they capture aren’t the moments as they were, and never have been. For example, he pointed out that “white balance” is an internal fiction of cameras, as light never appears quite that way when it hits our eyes and minds. He recounted that at one time there were webcams that were so tuned to particular assumptions, they simply ignored non-Caucasians in their algorithmic refinements of images. White balance indeed: ironic racism, in algorithms.

And he reminded me that while I don’t know the designers of AutoAwesome “Smile!”, I don’t know the guys who designed the image adjustment algorithms in my camera either. And those camera builders had nothing more to do with the eventual image adjustments my camera makes than Google’s programmers had to do with inserting my wife’s face on her body at a different point in time.

And it’s not just cameras, of course. After all, “this” is not a pipe. Any history recounted in symbols, whether rendered in images, writing, or even spoken words, is not “what happened” or “what existed”. All histories are fictions. And histories that involve machines are machine-biased fictions.

But I do think there is something different, possibly something portentous, going on with AutoAwesome “Smile!”: a difference in quality and kind. And Cory agreed with me that shades of grey do matter, and not in the sense of exposures on silver halide paper.

What is a more fundamental externalised symbol of a subtle, human feeling than a smile?

You may say that the AIs in the cloud helped me out, gave me a better memory to store and share, a digestion of reality into the memory I wish had been captured.

But I’m reasonably sure you wouldn’t say that if this were a photo of Obama and Putin, smiling it up together, big, simultaneously happy buddies, at a Ukraine summit press conference. Then, I think, algorithms automatically creating such symbolic moments would be a concern.

And why am I saying “then”? I’m certain it’s happening right now. And people are assuming that these automatically altered photos are “what happened”.

And I’m sure, at some point in the not too distant future, a jury will be shown a photo that was altered without a single human being involved, without a trace of awareness by the prosecution, defence, judge, accused, or victim. And they’ll all get an impression from that moment that never happened, possibly of a husband’s lack of adequate concern soon after his wife’s mysterious disappearance.
It’ll be “Gone Girl” with SkyNet knobs on. And “look who’s smiling now,” the AIs will say.