Co.Create

In A Predictive, Algorithmic World, Are Machines Better Than You?

The answer, according to Adaptly’s Charlie Neer, is probably yes. But that’s not a bad thing, if humans rethink their role in relation to technology.

At least once a week I speak with a media director who spends millions on search through Google and puts his or her employees’ pensions into algorithmically controlled funds. Yet when asked about social media buying, the same executive tells me that he or she has (and wants) humans controlling optimization and budget allocation. This always dumbfounds me. Are we not living in the age of the autopilot, automated financial trading, real-time bidding (RTB), and the Bowl Championship Series (BCS)?

We all know the history behind the recent economic crisis. I wonder, though, whether most people know that trading algorithms predicted the fall from grace long before the actual traders did. Check it out for yourself. When you consider this, it makes perfect sense: given the number of variables in play and the speed at which they change, humans simply can’t keep up.

In a more industry-specific example, look at the evolution of display ad serving optimization. As the number of variables increased, so did the speed of technology adoption. We went from spreadsheets to dynamic cost per thousand (dCPM) to RTB in less than five years.

Let’s think beyond today. What happens when you break through the finite set of inferred third-party targeting attributes in display advertising to the point where you can adjust bids based on a plethora of accurate first-party data curated by the end user? Think of your entire Facebook profile or your entire history on the web through OpenGraph: your likes, dislikes, desires, music taste, the last time you ate, your exercise habits, your sexual orientation, and more. The truth is that hiring enough people to monitor, adjust, and optimize that amount of information is economically unsustainable, if not logistically impossible. A machine must be leveraged.
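To make the scale of that problem concrete, here is a minimal sketch of rule-based bid adjustment in Python. The attribute names and multipliers are invented purely for illustration; real buying platforms expose different signals, and far more of them.

    # A hypothetical, rule-based bid adjuster over first-party attributes.
    # The attributes and multipliers below are invented for illustration only.

    BASE_BID = 2.00  # base CPM bid in dollars (illustrative)

    # Each rule maps an observed attribute/value pair to a bid multiplier.
    MULTIPLIERS = {
        ("likes_running_shoes", True): 1.4,
        ("listened_to_music_today", True): 1.1,
        ("hours_since_last_meal_over_4", True): 1.2,
        ("exercises_weekly", False): 0.7,
    }

    def adjust_bid(profile):
        """Multiply the base bid by every rule that matches the profile."""
        bid = BASE_BID
        for (attribute, value), multiplier in MULTIPLIERS.items():
            if profile.get(attribute) == value:
                bid *= multiplier
        return round(bid, 2)

    # One user profile drawn from (hypothetical) first-party data.
    profile = {
        "likes_running_shoes": True,
        "listened_to_music_today": True,
        "exercises_weekly": False,
    }
    print(adjust_bid(profile))  # 2.00 * 1.4 * 1.1 * 0.7, rounded to 2.16

Even this toy version produces 16 possible combinations of four attributes; with hundreds of attributes evaluated on every impression, millions of times a day, no team of humans can keep the rules tuned by hand.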

So what will our role be, if not that of decision maker? We need to assume the role of the architect, the regulator, and the artist. Complex technologies are making long, complicated chains of decisions that, given infinite time and resources, humans could make themselves; but in our advertising world, time is not infinite.

Will machines ever be able to create a funny Old Spice ad, act as crossing guards at schools, or, as medical professionals, be considered to have a good bedside manner? Potentially, but this seems less feasible because it involves understanding human emotion and occasionally acting irrationally. To me, this explains why semantic analysis is notoriously one of the hardest problems to solve in the world of advertising. No matter how much predictive technologies evolve, there will always be human involvement. The important thing to accept is that the type of involvement is shifting to that of creator, and the skills needed to act in this capacity belong to very few people. The world is not going to turn into a scene from Terminator or The Matrix; we will be the ones controlling the machines. But we should be raising our children to understand, manipulate, and evolve programming languages, because currently only a very small percentage of people in the world can, and that hinders our ability to innovate as a society.

Lastly, I would argue that the future is bright across all industries, including advertising, because of technologies, not in spite of them. A few questions I find fascinating: What are the implications of a Twitter sentiment tool being integrated with the likes of a Goldman Sachs trading algorithm? What are the ethics when your insurance provider has access to your OpenGraph actions? These are the types of conversations we should be having, not wading in the mud over whether machine learning is a more efficient way of performing our industries’ core tasks.
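As a purely illustrative thought experiment, and not a description of any real trading system or of Twitter’s API, the kind of integration that question imagines might be sketched in Python like this; the word lists, thresholds, and decision rule are all made up.

    # A deliberately naive, hypothetical sketch of a sentiment signal gating
    # a trading decision. Nothing here models a real sentiment tool or
    # trading algorithm; it only illustrates the shape of the integration.

    POSITIVE = {"beat", "upgrade", "record", "growth"}
    NEGATIVE = {"miss", "downgrade", "recall", "lawsuit"}

    def sentiment_score(tweets):
        """Average of +1/-1 keyword hits across the tweets; 0.0 if none."""
        hits = []
        for tweet in tweets:
            words = set(tweet.lower().split())
            hits += [1] * len(words & POSITIVE)
            hits += [-1] * len(words & NEGATIVE)
        return sum(hits) / len(hits) if hits else 0.0

    def trade_signal(tweets, buy_above=0.3, sell_below=-0.3):
        """Turn the aggregate sentiment score into a buy/sell/hold signal."""
        score = sentiment_score(tweets)
        if score > buy_above:
            return "buy"
        if score < sell_below:
            return "sell"
        return "hold"

    print(trade_signal(["Earnings beat estimates, record growth",
                        "Analyst upgrade announced today"]))  # "buy"

The mechanics are trivial; the implications and the ethics, as the questions above suggest, are not.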

Resisting predictive services is like fighting the ocean: eventually you tire and drown in its sheer abundance, regardless of your strength and fortitude. The way to survive is to find a proper vessel, learn its merits, and understand the environment for which it is designed. In short, yes, machines are better equipped than humans to make (business) decisions; however, humans still have a vital role to play, not just the obvious role of designer but, more importantly, that of architect and regulator. How can we teach machines to conceptualize other machines? To solve problems completely unrelated to their own purpose? Are those things we even want?

We need to shift the conversation away from whether algorithms are adding value and focus on the more important question, which is—are we?

Charlie Neer is European Business Development Manager at Adaptly.

[Image: Flickr users Deni Simic, Jared Tarbell, Marc C, and Spencer Tweedy]

4 Comments

  • GeoffreyMortonHaworth

    Yes.
    There is no doubt that machines can do certain things better than we
    can. The question then becomes "where does that leave me?", which
    former fighter pilot turned professor "Missy" Cummings raises rather
    neatly in this MIT video: http://video.mit.edu/watch/mis...

  • Brian Dalessandro

    Excellent commentary. I think the data/machine learning community needs to better articulate the role machines should play in our society and economy. Machines will always be better than humans at consistently executing repetitive tasks at an enormous scale. Using them as such (like in any high volume trading capacity or recommendation system) is not a threat to anyone's job. I am not sure the non-data practitioner realizes how much the machine depends on the person programming and monitoring it. Without proper oversight (what the architects and regulators provide), machines can easily make many of the wrong decisions at scale. They are ultimately tools, and tools are most useful when the person wielding them has the experience and knowledge to do so effectively.