It's Official: The AIs Hear All Our Secrets by Rob Smith

This week, Stephen Hawking said that self-aware AIs pose a threat to the future of humanity. I'm afraid he's right, but not in the way most people might think.

The threat of AI isn't a self-aware movie monster. The evil we should fear is more like the one Hannah Arendt identified: it's not special, it's banal. And it's not in the future, it's right now.

Case in point: I was talking to a friend of mine in Seattle this week, and he told me a story about a family conversation over Thanksgiving dinner. He's telling his family about Cafe Juanita, a recently discovered local eatery that he and his wife enjoy. The name of the restaurant is mentioned a number of times, and my friend's daughter pulls out her new iPhone 6 to look it up.

She types in "Cafe...", and immediately Cafe Juanita pops up as the number one item in the search. The family thinks this is weird, so everyone pulls out their (non-iPhone) smartphones. None of them return Cafe Juanita in the top 20 hits.

Are you ahead of me here? That's right: Siri was activated on the iPhone, and apparently it was eavesdropping*.

Lucky for little Cafe Juanita, you might say, and I'd agree with you. But before we get too complacent, think about what pays for all the AI that is serving us. Remember how Gmail is paid for by AdWords? Those annoying posts Facebook sticks in your newsfeed that look vaguely like they came from your friends? The way Amazon thinks that just because you bought Winter Soldier, you want offer emails for every comic book movie ever made?

(Not to mention that Siri might hear your most private conversations: think of the ad placements that might generate while you are searching in front of your business associates or your kids. Bet that will make you turn off your phone in the bedroom on date night.)

I'd say the threat isn't self-aware AIs listening to our conversations, coldly and jealously plotting against us. That would at least be interesting (in a purely academic way, of course). No, it's that our conversations are just more fuel for simplistic AI engines that feed on banal consumerism.

These AIs are listening *now*. And they make a nice little profit. It's the junk mail strategy: sure, most people just throw it away, but it's cheap enough that if 0.01 percent buy, that's enough to make the monster grow. If you don't believe that commercial entities that prey on our most basic instincts and greatest vulnerabilities will expand and come to dominate, think about the fast food industry, big media, or the major political parties.

It's big, it's out there, it's profitable, and we're feeding it with our "big data". And it's not interested in becoming a higher form of intelligence, in evolving consciousness, or even in ruling us with a memetic chrome fist. It just wants to sell us crap, based on the simple-minded model of automated "personalization" (read "targeted marketing").

And it absolutely will not stop until we buy.

At least you can crush the head of The Terminator with an industrial press. How do you destroy a massive network of schlock-peddling AIs that treat us all like profit-centre paramecia?

There's something to go all John Connor on, Professor H. If you want to take it underground, give me a call. But let's both turn off Siri first.

*Side note: I looked into this, and Siri doesn't currently listen all the time, except under very particular circumstances, so this anecdote may be missing some details. But there are a number of such anecdotes out there, and continuous smartphone listening is certainly on the cards.
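To put rough numbers on the junk-mail strategy above, here's a back-of-envelope sketch. Every figure in it is invented for illustration; the point is only that at scale, a vanishingly small conversion rate can still turn a profit.

```python
# Back-of-envelope economics of high-volume, low-conversion targeting.
# All numbers are assumptions for illustration, not industry figures.
cost_per_message = 0.001   # assumed cost to serve one targeted ad, in dollars
conversion_rate = 0.0001   # the "0.01 percent" who actually buy
profit_per_sale = 20.0     # assumed margin on one sale, in dollars
messages = 10_000_000      # ads served

cost = messages * cost_per_message
revenue = messages * conversion_rate * profit_per_sale
print(f"cost ${cost:,.0f}, revenue ${revenue:,.0f}, profit ${revenue - cost:,.0f}")
# cost $10,000, revenue $20,000, profit $10,000 -- cheap enough to keep the monster growing
```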

It's Official: AIs are now re-writing history by Rob Smith

[Photo: the AutoAwesome "Smile!" composite (IMG_2559-SMILE2)]

The other day I created a Google+ album of photos from our holiday in France. Google’s AutoAwesome algorithms applied some nice Instagram-like filters to some of them, and sent me emails to let me have a look at the results. But there was one AutoAwesome that I found peculiar. It was this one, labeled with the word “Smile!” in the corner, surrounded by little sparkle symbols.

It’s a nice picture, a sweet moment with my wife, taken by my father-in-law, in a Normandy bistro. There’s only one problem with it. This moment never happened.

The photo is a not-so-subtle combination of this one:

[Photo: the first burst frame (IMG_2579)]

and this one:

[Photo: the second burst frame (IMG_2559)]

Note the position of my hands, the fellow in the background, and my wife’s smile. Actually, these photos were part of a “burst” of twelve that my iPhone created when my father-in-law accidentally held down the button too long. I only uploaded two photos from this burst, to see which one my wife liked better.

So Google’s algorithms took the two similar photos and created a moment in history that never existed, one where my wife and I smiled our best (or what the algorithm determined was our best) at the exact same microsecond, in a restaurant in Normandy.

So what? Good for the algorithm’s designers, some may say. Take burst photos, and they AutoAwesomely put together what you meant to capture: a perfectly coordinated smiley moment. Some may say that, but honestly, I was a bit creeped out.

Over lunch, I pointed all this out to my friend Cory Doctorow. I told him that algorithms are, without prompting from their human designers or the owners of the photos, creating human moments that never existed.

He was somewhat nonplussed. He reminded me that cameras have always done that. The images they capture aren’t the moments as they were, and never have been. For example, he pointed out that “white balance” is an internal fiction of cameras, as light never appears quite that way when it hits our eyes and minds. He recounted that at one time there were webcams so tuned to particular assumptions that they simply ignored non-Caucasians in their algorithmic refinements of images. White balance indeed: ironic racism, in algorithms.

And he reminded me that while I don’t know the designers of AutoAwesome “Smile!”, I don’t know the guys who designed the image-adjustment algorithms in my camera either. And those camera builders had nothing more to do with the eventual image adjustments my camera makes than Google’s programmers had to do with inserting my wife’s face on her body at a different point in time.

And it’s not just cameras, of course. After all, “this” is not a pipe. Any history recounted in symbols, whether rendered in images, writing, or even spoken words, is not “what happened” or “what existed”. All histories are fictions. And histories that involve machines are machine-biased fictions.

But I do think there is something different, possibly something portentous, going on with AutoAwesome “Smile!”: a difference in quality and kind. And Cory agreed with me that shades of grey do matter, and not in the sense of exposures on silver halide paper.

What is a more fundamental externalised symbol of a subtle, human feeling than a smile?

You may say that the AIs in the cloud helped me out, gave me a better memory to store and share, a digestion of reality into the memory I wish had been captured.

But I’m reasonably sure you wouldn’t say that if this were a photo of Obama and Putin, smiling it up together, big, simultaneously happy buddies, at a Ukraine summit press conference. Then, I think, algorithms automatically creating such symbolic moments would be a concern.

And why am I saying “then”? I’m certain it’s happening right now. And people are assuming that these automatically altered photos are “what happened”.

And I’m sure, at some point in the not too distant future, a jury will be shown a photo that was altered without a single human being involved, without a trace of awareness by the prosecution, defence, judge, accused, or victim. And they’ll all get an impression from that moment that never happened, possibly of a husband’s lack of adequate concern soon after his wife’s mysterious disappearance.
It’ll be “Gone Girl” with SkyNet knobs on. And “look who’s smiling now,” the AIs will say.
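(For the technically curious: here’s a minimal sketch of what a “best smile” burst composite might involve. This is emphatically not Google’s algorithm, which is unpublished; it’s a naive illustration using OpenCV’s stock face and smile detectors, and it assumes the burst frames are already aligned, as near-simultaneous frames from a steadily held camera roughly are.)

```python
# A naive sketch of burst-photo "best smile" compositing. Not Google's
# method; just an illustration of the general idea using OpenCV.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def smile_score(gray_face):
    """Count smile detections inside a face crop: a crude 'how smiley' proxy."""
    smiles = smile_cascade.detectMultiScale(gray_face, scaleFactor=1.7, minNeighbors=20)
    return len(smiles)

def composite_best_smiles(base_path, donor_path, out_path):
    """Paste each face region from the donor frame over the base frame
    whenever the donor face scores as smilier. Assumes the two burst
    frames are the same size and already aligned."""
    base = cv2.imread(base_path)
    donor = cv2.imread(donor_path)
    gray_base = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
    gray_donor = cv2.cvtColor(donor, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray_base, 1.3, 5):
        if smile_score(gray_donor[y:y+h, x:x+w]) > smile_score(gray_base[y:y+h, x:x+w]):
            base[y:y+h, x:x+w] = donor[y:y+h, x:x+w]  # naive hard-edged paste
    cv2.imwrite(out_path, base)

# Hypothetical filenames, echoing the burst frames above.
composite_best_smiles("IMG_2579.jpg", "IMG_2559.jpg", "smile_composite.jpg")
```

A real system would blend the pasted region's edges and re-align the frames first; the hard-edged paste here is exactly the kind of seam a production algorithm works to hide.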

The Corridor of Uncertainty by Rob Smith

Andy Haldane, Chief Economist at the Bank of England, made a nice mention of a paper by David Tuckett, Rickard Nyman, and myself in a speech entitled The Corridor of Uncertainty, saying:

To that end, the Bank is investing in improving its own data architecture and analytics. Perhaps a more timely reading of the economic and financial tea leaves can be found by scraping the web or by semantic search on social media sites? Recent research has suggested just that. These are the sorts of question the Bank’s rocket scientists can help us answer.

It's an interesting speech, and worth clicking through to read in its entirety. The paper cited is related to work Rickard did while an intern at the BoE last summer.

New Papers on Psychologically-Directed Big Data Analysis in Economics by Rob Smith

I wanted to announce that my work with the Centre for the Study of Decision-Making Uncertainty is beginning to yield even more interesting papers. We've got two papers coming up at the European Central Bank Workshop on Using Big Data for Forecasting and Statistics.

One of these is a collaboration with David Gregory and Sujit Kapadia at the Bank of England, along with my ongoing collaborator David Tuckett and our student Rickard Nyman, entitled "News and narratives in financial systems: exploiting big data for systemic risk assessment" (the paper is pending release approval, but the presentation is available here). It's the outcome of Rickard's internship at The Bank.

The other is with David, Rickard, and our other team member, Paul Ormerod. It's entitled "Big Data and Economic Forecasting: A Top-Down Approach Using Directed Algorithmic Text Analysis" (click for an offprint).

Paul, David, Rickard, and I also have a paper entitled "Bringing Social-Psychological Variables into Economic Modeling: Uncertainty, Animal Spirits and the Recovery from the Great Recession" at the International Economic Association's 17th World Congress at the Dead Sea, in Jordan.
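For readers wondering what directed algorithmic text analysis can look like in practice, here's a toy sketch: scan a stream of text for emotion-laden words and form a relative index from the counts. The word lists and the index formula below are illustrative assumptions of mine, not the methodology of the papers above.

```python
# A toy sketch of emotion-directed text analysis: count words from two
# assumed emotion categories and form a simple relative-balance index.
import re

# Illustrative word lists; real studies use much larger, validated lexicons.
EXCITEMENT = {"optimism", "confidence", "boom", "rally", "growth"}
ANXIETY = {"fear", "worry", "crisis", "panic", "uncertainty"}

def relative_sentiment(text):
    """Return (excitement - anxiety) / (excitement + anxiety) for a text,
    or 0.0 if no emotion words are found."""
    words = re.findall(r"[a-z']+", text.lower())
    excite = sum(w in EXCITEMENT for w in words)
    anx = sum(w in ANXIETY for w in words)
    total = excite + anx
    return (excite - anx) / total if total else 0.0

print(relative_sentiment(
    "Markets rally on renewed confidence despite lingering fear."))  # 0.333...
```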