In Tyler Cowen’s book Average is Over, he makes two related predictions about how machine intelligence may interact with human behavior and affect human success in the near future. First, there will be increasing sophistication and use of methods for tracking and measuring human performance in a wide range of endeavors, including but not limited to one’s job performance. Already, of course, a prospective employer can check your credit score and look you up on social media. Once on the job, the capacity to constantly monitor, measure, and quantify your true performance value will increase over time, and the results may not turn out kindly for a lot of workers.
Beyond employment metrics, this trend of data collection and quantification may encroach on other areas of life: marking us as good or not-so-good dating prospects perhaps, or giving universities many more ways to predict a prospective student’s academic performance, or giving our doctors a picture of how compliant or trustworthy we are as patients, and likewise giving us better information on the performance of our doctors, and so on.
Cowen sees this ubiquitous measurement trend as unsavory and disquieting, but he sees no way to stop it. It may end up delivering many benefits, but people generally do not like being surveilled and weighed and measured in everything they do. (FYI, here is a great interview transcript with Cowen summarizing many themes of his book.)
Like Cowen, I think a lot of people find this sort of constant measuring uncomfortable. Maybe we will eventually recoil from this over-quantification, this empirical obsessiveness, which when extrapolated reduces all human pursuits to a sort of sabermetrics calculation. Though I wonder how we will react when the price of access to a good and prosperous life is acquiescing to this constant invigilation by our data-hungry metric overlords. Will this "entry fee" clash with other more basic human psychological motivations?
Dostoyevsky, in his Notes From Underground, warns that attempting to reduce human behavior to a set of algorithms or "tables" is bound to fail and backfire. He, like Cowen, would concede that such sophisticated quantification may indeed be able to reveal where our own best interests lie, and allowing others access to this data jackpot may be the best move for our professional and personal flourishing. But the human will is not primarily interested in following the course of rational best interest, nor in flourishing.
A man, whoever he is, always and everywhere likes to act as he chooses, and not at all according to the dictates of reason and self-interest; it is indeed possible, and sometimes positively imperative, to act directly contrary to one’s own best interests. One’s own free and unfettered volition, one’s own caprice, however wild, one’s own fancy, inflamed sometimes to the point of madness, that is the one best and greatest good.
Dostoyevsky writes that a man will always commit abominations counter to his interests, “just so that he can assert, as if it were absolutely essential, that people are still people and not piano-keys.”
More than that: if men really turned out to be piano-keys, and if it was proved to them by science and mathematics, even then they would not see reason, but on the contrary would deliberately do something out of sheer ingratitude in order, in fact, to have their own way.
We can all agree, Dostoyevsky says, that two-and-two-make-four is an excellent thing; “but to give everything its due, two and two make five is also a very fine thing.”
In this future quantified world, rejecting the measurements of the machines, and refusing to be defined and played upon by these metrics, will be asserting 2+2=5 against our own revealed best interests, in order to assert our "own free and unfettered volition."
But Cowen has another prediction about the future ability of machines to reveal to us a best course of action, and it might lead to a very different conclusion.
As our personal devices collect and collate ever more individualized information from every facet of our lives, they may come to develop strong opinions on our behalf regarding dating advice, career advice, investment advice, leisure advice, and so on. Cowen thinks that a key predictor of success in the future will be one’s willingness to defer to the machines. But will this satisfy our demand for self-volition and recognition?
Because of their ability to synthesize an unlimited amount of data from disparate sources, and because they are unfettered by emotion and fear and anxiety, computer recommendations for an individual’s decision-making could in fact be more creative, more nimble, and at least superficially appear to be against our best interests. If the computer is thinking with such advanced abstraction, using far more information than humans can process, its behavior recommendations may seem full of caprice and even "inflamed to the point of madness." But in fact the computer will merely be asking us to override our human intuition, which is so often faulty and unreliable and beset by bias and superstition.
So this inverts the Dostoyevsky aphorism: it is human intuition that is often limited by staid 2+2=4 thinking, and it is the computers that may insist to us that 2+2=5 is the best course of action.
Cowen has an illustrative story about this: Imagine you are on a date in the future, and at a key point in the evening your phone or equivalent device starts buzzing at you, and it starts flashing, "kiss her now!" Depending on your intuition and mood at that moment, perhaps that recommendation strikes you as 2+2=5 thinking, essentially inconceivable and maybe anti-rational, and surely counter to your own current best interests. But the device has been listening to your conversation, measuring your heart rate, calibrating the vocal timbre of you and your date, and analyzing the entire literature and history of male-female romantic interaction, etc. Maybe you should listen. Cowen argues that those willing to heed their device in these sorts of circumstances may find better relationship success. And in many other areas of life, those willing to override their often-terrible human intuition and listen to the capricious machines may prosper more than those who stubbornly refuse.
But will this hectoring by our devices offend our pride and our demand for self-volition? Or will we see the device as an ally and in fact an extension of our own volition?
If the former, in order to continue to assert our own volition in the Dostoyevsky sense and prove we are not piano-keys, we will have to reject the computer’s innovative 2+2=5 thinking, and affirm the old boring predictable 2+2=4. In these cases, following our 2+2=4 intuition will become the revolutionary act, the radical departure from our own true best interests revealed through the computer’s peculiar genius. Dismissing something on your notification bar could become the height of human protest. How radical.
The base human desire for self-volition and recognition will not go away. But what counts as an assertion of human volition and individual recognition will get tricky. In the face of ubiquitous quantification, and besieged by machine advice about our best self-interest, how people come to define what qualifies as a successful satisfaction of these base desires may affect the future trajectory of man-machine interaction in profound ways.