How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms

William Forbes (School of Business, Queen Mary University of London – Mile End Campus, London, UK)

Qualitative Research in Financial Markets

ISSN: 1755-4179

Article publication date: 16 January 2024

Issue publication date: 16 January 2024


Citation

Forbes, W. (2024), "How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms", Qualitative Research in Financial Markets, Vol. 16 No. 1, pp. 1-8. https://doi.org/10.1108/QRFM-02-2024-238

Publisher: Emerald Publishing Limited

Copyright © 2024, Emerald Publishing Limited


Introduction

This book tells the story of our uneasy relationship with AI. Dystopias abound, be it HAL in “2001: A Space Odyssey” or Skynet in the “Terminator” films. Gigerenzer argues that both humans and machines still hold a comparative advantage in some elements of decision-making. So it may be a while before a bot can replace our partner or kids. Gigerenzer describes the book as being about:

[…] the human affair with AI, about trust, deception, understanding, addiction and personal and social transformation. (p. 14, Gigerenzer, 2022)

Machines perform best in stable world environments, where we can repeatedly sample, say, the loadings on a bridge and predict what loadings could threaten that bridge’s integrity. Such stable world predictive problems are common in physical science, where weights and load frequencies do not suddenly change. As we know, finance scholars and investors can fall victim to physics envy, allowing the “rocket scientists” to move in and dominate the field (Lo and Mueller, 2010; Lowenstein, 2000).

A distinct threat Gigerenzer highlights is that we deify smart systems at the expense of supposedly dumb humanity. Thus, we accept that man has been, or soon will be, surpassed by machines. This might be used to justify a new conformity to what expert systems say about our creditworthiness, capabilities or even societal worth. Thus, we are constantly reminded how smart AI has become, without any exhortation to enhance and develop our own humble human understanding. No wonder many feel threatened, if not repelled, by the emergence of AI-enabled “expert systems”. In truth, these systems may not be expert, reliable or even sensible guides to our own, or others’, behaviour. To allow a more detailed discussion of the book, I follow Gigerenzer in structuring the discussion in two parts:

  1. the human affair with AI: which discusses how humanity and machines think differently; and

  2. high stakes: which addresses how this relationship has already become problematic.

The human affair with AI

Gigerenzer examines how AI guides us in love by looking at dating apps. He finds much wanting and suggests that old-fashioned techniques of meeting at work, or at a party, work at least equally well.

He suggests that this is in part because the algorithms to match lovers are themselves complex, balancing at least three elements:

  • similarity: we like those of similar faith, income and education;

  • complementarity: boxers/athletes may find medical specialists a useful match; and

  • importance: the weight attached to some desired characteristic. So a barrister who states a preference for another barrister may still find a politician or an NGO executive equally attractive. But for some barristers only another barrister will do. So we need to weight requirements from desired to essential.

Such a decision-making algorithm requires trade-offs to be made within given tastes, yet the whole point of meeting others is often to explore and form our own tastes. Furthermore, even if dating sites could find us “the one” at the first suggestion, it is not clear they would wish to do so. Their creators might wonder: where is the money in creating love at first click?

To discuss the progress of AI we need some conception of what intelligence is. In the 18th century, arithmetic skills were part of Carl Friedrich Gauss’ claim to genius, and stories of Gauss’ ability to solve difficult mathematical problems abound. Later, the women, or “computers”, servicing the work of the Enigma code breakers or the Manhattan Project were regarded as little more than drones, or worker bees, supporting the genius of Alan Turing or Robert Oppenheimer. Gigerenzer points out that this conception is not surprising given how crude and error-prone mechanical computing machines were then known to be. Today, with more reliable and multi-purpose computers, the admiration of raw arithmetic calculating power is back. Gigerenzer points out that this confirms a trend in psychological research towards invoking images of the mind based on currently dominant research techniques (Gigerenzer, 1991).

From the 1960s our mind was portrayed as an “intuitive statistician”, mimicking the statistical tests beloved of journal editors and reviewers. Now our minds have become, or some suggest should become, computers. This analogy, once made by none other than John von Neumann (von Neumann, 1958), was described by Turing as invoking a “very superficial similarity”. Our minds are not computers, and we might wonder why anyone would wish them to be.

Gigerenzer also gives some guidance as to where humanity may have a comparative advantage over AI. He notes that AI has done well in tasks that have well-defined outcomes and clear rules for bringing those outcomes about, say playing chess or Go. But AI finds decision-making more difficult in contexts where the rules are not clear, or may change, and outcomes cannot be specified. So it may not be surprising that AI struggles to find us the love of our life. Gigerenzer argues that AI does well in environments drawn from a stable world, where all potential pay-offs are known and the rules for obtaining those pay-offs are stable and transparent.

Integrating humanity and machines

Part of producing an AI-enabled environment will be making our own behaviour more restricted and more predictable by machines. So many of us will not venture downstairs at night for fear of triggering the burglar alarm. Elaine Herzberg, a homeless woman in Arizona, was hit in 2018 by an autonomous Uber vehicle being tested in her area. Herzberg was pushing her bicycle across the highway one night when the crash that killed her happened. Her life could have been saved, either by a vigilant driver overriding the AI-enabled Uber, or by her not walking on the highway.

As AI progresses it is unreasonable to expect all accommodations to be made by machines, and we can expect our lives to become AI-compliant and AI-friendly. Perhaps nowhere will this trade-off be more tense than in the exchange of convenience for surveillance. The same AI-enabled car that ensures I default to the speed limit in each new area will notify the police and my insurance company if I do not do so.

Such transparent AI, where perhaps the contents of my fridge are forwarded to my doctor, may be required as the demands of enabling AI/smart systems start to change how we live and what we can and cannot do. Truly autonomous driving can already be achieved in locations with specific lanes reserved for autonomous vehicles and fully separated pedestrian walkways.

To harvest the full benefit of a smart/AI-enabled world, human behaviour must be made more stable and hence predictable. In a commercial world, where the waiting time when I call the bank may be decided by an AI system, folk may ask “why am I always last in the queue?”. This will demand a transparency and accountability of AI that our black-box systems currently deny. “The computer says no” will no longer be enough.

Forecasting not fitting: decision-making in an unstable world

In a stable world, such as the stress loading of bridges or predicting the orbits of planets around the sun, standard models, which estimate the necessary parameters, work pretty well. But for many other problems we cannot know all the required data accurately, and the process generating what data we have is itself changing over time. This may be especially true of big questions, such as how Brexit or Covid will impact the stock market.

The exposure of predictive models as having little practical use can induce what is called the “Texas sharpshooter fallacy”, so called because a gunslinger looks better when he can draw the target after he has shot his bullets. In the same way it can be easy, especially for commercial purposes, to praise a model’s ability to fit the data in sample, while ignoring its inability to predict that same process out of sample. Gigerenzer discusses many historic examples of the never knowingly undersold nature of AI, such as IBM Watson’s application to cancer detection. The “super intelligence” of AI tools may be less than is claimed.

In a world where estimated coefficients in the sample and test periods differ, an unstable world, alternative methods for making decisions are needed. Gigerenzer and Laura Martignon have made some progress in this regard (Martignon and Hoffrage, 1999; Martignon et al., 2011). Martignon and colleagues have advanced a decision-making method called fast-and-frugal trees. These are a ranked set of yes/no questions best suited to predicting one of two future courses of action or states, say buy/sell or boom/bust.

Those wishing to use this method can easily do so using the FFTrees library within R (Phillips et al., 2017; Katsikopoulos et al., 2020); a stylised sketch of the idea is given below. Such trees can embed a more psychological AI, based on the decision-making cues humans use, or should use, in making good forecasts. This avoids the difficulty of feeding an AI system data and having no idea how it was used to generate forecasts. It allows for a transparency of prediction that many (“the computer says no”) AI systems currently deny.
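
To make the structure of such a tree concrete, here is a minimal sketch in Python (the FFTrees package itself is an R library). The cue names and thresholds are invented for illustration; they are not taken from the book or from the FFTrees package.

```python
# A minimal sketch of a fast-and-frugal tree: a ranked set of yes/no questions
# where every cue except the last offers an immediate exit with a decision.
# Cue names and thresholds are illustrative assumptions only.

def fast_frugal_tree(firm):
    """Classify a stock as 'buy' or 'sell' with three ranked yes/no cues."""
    # Cue 1: heavily indebted firms are sold immediately.
    if firm["debt_to_equity"] > 2.0:
        return "sell"
    # Cue 2: firms with falling earnings are sold.
    if firm["earnings_growth"] < 0.0:
        return "sell"
    # Cue 3 (final cue): both exits are used.
    return "buy" if firm["price_to_book"] < 1.5 else "sell"


if __name__ == "__main__":
    example = {"debt_to_equity": 0.8, "earnings_growth": 0.05, "price_to_book": 1.2}
    print(fast_frugal_tree(example))  # -> "buy"
```

The point of the sketch is that every step is readable: a user can see exactly which question sealed the decision, which is precisely the transparency black-box systems deny.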

Little is known about how neural networks learn, but what we do know is that they do not learn as we do. So a smart app can guide you to the love of your life without knowing what love is. Similarly, a smart drone can snuff out a young life without knowing what the value, or joy, of life is. So an AI recognition system describes a photograph of a plane with its wing hitting the hard shoulder of a highway as “an airplane parked on the tarmac at an airport”. The incongruity, or horror, of a plane on a highway does not strike fear in the AI system’s mind as it would in our own. Thus, “smart” systems may make fewer errors than us, while making errors that strike us as absurd and somewhat insane.

We might see smart systems as little more than “memory machines”, like Solomon Shereshevsky, who could memorise huge chunks of literature but, when asked what these works meant or made him feel, had little idea. He had looked but not seen, heard but not listened. This is the reverse of chess grandmasters, who see board configurations in stylised forms and rely on simple heuristics in making their move choices, not strenuous calculations of the “if I move here they will move there” type. Decision-making for such experts is stylised and highly context dependent. For a smart system there is no greater context or inferred meaning.

Can less be more in predicting financial outcomes?

A constant promise of smart systems is that we can use so much data that we no longer need to model who will break the law, get cancer or have a heart attack. Theoretical debates between Keynesians and Monetarists, Liberals and Socialists are pointless when we can just “let the data speak”. But the (big) data can speak rather unhelpfully, showing large correlations between Nobel Prizes awarded to each nation and their chocolate consumption. Or the strong correlation between margarine consumption and divorce rates in some US states. But these spurious correlations are not reliable guides to future behaviour, imploding when the sample frame is extended.

Illustrative of this failure of correlation is the fate of the Google Flu Trends app, which used searches for “runny nose” or “coughing” to predict how seasonal flu evolves in America. First trialled in 2008, the app did a pretty good job. It chose between 450 million different models, settling on 45 specific search terms drawn from some 50 million candidate searches in the Google database. But then in 2009 swine flu emerged. Swine flu has different symptoms and a different incubation period. The flu world, which is inherently unstable, had changed. So the Google Flu Trends team upped their specific search terms to 160. Disappointingly, the Google Flu Trends app continued to fail and it was phased out in 2015.

Later, Gigerenzer and his colleagues asked if the predictions of Google Flu Trends could beat the simple assumption that next week’s infections, in a given area, will be the same as last week’s: a simple no-change model (Katsikopoulos et al., 2022). They found this simple “recency heuristic” beat the complex calculations and tiresome data demands of the Google Flu Trends app. This confirms a major theme of Gigerenzer’s prior work, that “less is more” in predictive models. Complex models simply over-fit the data in sample and promptly collapse when confronted with new data.
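
The recency heuristic itself is almost trivially simple to state in code. The Python sketch below uses invented weekly counts purely for illustration; they are not the influenza data analysed by Katsikopoulos et al. (2022).

```python
# The recency ("no change") heuristic: forecast next week's incidence as
# simply this week's observed value, then score the forecasts with mean
# absolute error. Weekly counts below are invented for illustration.

def recency_forecast(series):
    """Predict each week's value as the previous week's observed value."""
    return series[:-1]  # forecasts for weeks 2..n


def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)


if __name__ == "__main__":
    weekly_cases = [120, 135, 150, 180, 260, 240, 200]  # illustrative only
    forecasts = recency_forecast(weekly_cases)
    actuals = weekly_cases[1:]
    print(round(mean_absolute_error(actuals, forecasts), 1))
```

There is nothing to estimate and nothing to over-fit, which is exactly why such a rule can survive a change in the world that wrecks a 450-million-model search.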

High stakes

In the second part of his book, Gigerenzer deals with some of the downsides of our life in a smart-/AI-enabled world. These include:

  • (lack of) transparency;

  • surveillance of our choices, movements, contacts and moods; and

  • addiction to online porn, gambling or gaming.

Fast, frugal, transparent and accurate algorithms for decision-making

Gigerenzer points to a false binary between complex decision-making tools and accurate ones. Complex problems may have simple solutions. He gives a number of cases where simple heuristic rules beat complicated, and therefore opaque, decision-making methods. From this he formulates a simple guide to the building of smart-/AI-enabled systems, stating a:

Transparency-meets-accuracy principle: under uncertainty, transparent algorithms are often as accurate as black-box algorithms. (p. 148, Gigerenzer, 2022)

Examples of this principle at work abound. One such illustration emerged from a competition entitled the “explainable machine learning challenge”, which asked entrants to predict lending defaults using data such as FICO/borrower quality scores. This challenge attracted the greatest minds in AI, keen to display their prowess. A team from Duke University took a different road (Rudin and Radin, 2019). They created software that visualised the likely impact of credit factors on default. This simple graphical analytical tool at least matched the predictive ability of any other proposed AI-enabled model. It was simple, transparent and accurate.

But we might wonder why transparency is worth having. If the State, or say a traffic warden, wishes to exercise control over us, it is surely better if they can justify their action. So perhaps my home was searched because I was regarded as a possible criminal, and here is the reason why I was chosen for a search. I may not like the reason, but at least now we can discuss it. In a democracy, with informed citizens, this makes sense. We seek a government of laws, clearly articulated, not of men.

Gigerenzer points to the progress made in obtaining informed consent from patients in health care. Such consent must be more than just signing up; it must include ensuring understanding of the agreed course of treatment. Why can’t the smart systems we use match this transparency? Gigerenzer advocates the adoption of simple, easy-to-read “sign-up” agreements that inform and empower the new user.

Smart, informed citizens in a world of smart systems

One reason for making transparency such a central feature of AI/smart systems is to close the loop between a smart world and the behaviour of the (possibly less smart) people who inhabit it. If I know that a previous conviction can trigger repeated searches of my home upon release, I may be more wary of getting a first conviction. Or if I know using my overdraft may result in me being last in the queue when I phone the bank, I may be more careful with my money. Transparency allows for a dialogue between AI system writers and users. This allows us to be more than data inputs to the “smart” system; we become contributors to its construction and revision.

Nor can we assume that smart systems are either that smart or that morally righteous. The racism displayed by facial recognition software in identifying people of interest is legendary. It turns out that the databases of faces on which facial recognition software is trained are dominated by white men. So if the software has to detect the sex of a person in a blurred image, it is likely to say it is a man. Doing so is rational if you want to maximise the algorithm’s “hit rate”: most subjects are men, so predict it is a man in the blurred image. But this assumes the characteristics of the training and test data sets are the same. This may be true, but we cannot simply assume it for computational ease. That is to demand that the world conforms to our chosen model. Never a good way forward.
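
The base-rate logic here is easy to demonstrate. The toy Python sketch below (not from the book, with an invented 90/10 training skew) shows why always predicting the majority class of a skewed training set maximises the hit rate there, yet collapses once the test data have a different base rate.

```python
# A toy illustration of hit-rate maximisation under a skewed training set.
# The 90/10 skew and the test sets are invented for illustration only.
from collections import Counter


def majority_class_classifier(train_labels):
    """Return a classifier that always predicts the most common training label."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return lambda _example: majority


def hit_rate(classifier, examples, labels):
    hits = sum(classifier(x) == y for x, y in zip(examples, labels))
    return hits / len(labels)


if __name__ == "__main__":
    train = ["man"] * 90 + ["woman"] * 10          # skewed training faces
    clf = majority_class_classifier(train)

    same_base_rate = ["man"] * 90 + ["woman"] * 10  # hit rate 0.9
    shifted_base_rate = ["man"] * 50 + ["woman"] * 50  # hit rate 0.5
    print(hit_rate(clf, range(100), same_base_rate))
    print(hit_rate(clf, range(100), shifted_base_rate))
```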

An example of prediction based on a series of yes/no questions is Professor Allan Lichtman’s book The Keys to the White House (Lichtman, 2020). The Lichtman model predicted Trump’s 2016 victory when almost no one else did. The tearful faces in the Javits Center told of the shock of Hillary Clinton’s [1] supporters. Lichtman’s model got it right using simple yes/no indicators, of which the first three are:

  • after the midterms the incumbent President’s party increases its seats in the House of Representatives;

  • there is no serious primary contest to determine the incumbent party’s candidate to be President; and

  • the incumbent party candidate is the sitting President.

But we notice that these key indicators are to some degree subjective too. We might wonder what constitutes a “serious” challenge in the primary to be the incumbent party’s Presidential candidate, or what amounts to an “economic recession” in a later key, of the 13 keys Lichtman uses.

We may also wonder whether all 13 keys to the White House are of equal predictive value. Lichtman simply tallies up the answers to the keys in his model: when six or more keys count against the incumbent party, the sitting President, or his party, is predicted to lose office; otherwise it retains the White House. So the model is simple, transparent and it works better than more complex “big data” alternatives such as those offered by Nate Silver at 538 [2].
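
A minimal sketch of this tallying rule in Python follows. The abbreviated key labels and the true/false answers are invented for illustration and are not a forecast of any actual contest.

```python
# Lichtman-style tallying: unit weights, no estimated coefficients.
# Key labels are abbreviated; the answers below are illustrative only.

def keys_forecast(keys):
    """Six or more keys against the incumbent party predicts it loses;
    otherwise it retains the White House."""
    keys_against = sum(1 for answer in keys.values() if not answer)
    return "incumbent party loses" if keys_against >= 6 else "incumbent party wins"


if __name__ == "__main__":
    illustrative_keys = {
        "midterm gains": False,
        "no primary contest": True,
        "incumbent seeking re-election": True,
        "no third party": True,
        "strong short-term economy": False,
        "strong long-term economy": False,
        "major policy change": True,
        "no social unrest": True,
        "no scandal": False,
        "no foreign/military failure": False,
        "major foreign/military success": False,
        "charismatic incumbent": False,
        "uncharismatic challenger": True,
    }
    print(keys_forecast(illustrative_keys))  # 7 keys against -> loses
```

Like the fast-and-frugal tree, the tally is fully transparent: anyone can recount the keys and argue about each answer, which is exactly where the model’s useful subjectivity lies.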

A creeping surveillance state and surveillance capitalism

Most of us worry about our privacy, trying to limit others’ knowledge of our address, phone number and bank details. Gigerenzer points out that the striking thing is how few of us are willing to pay for it. A 2019 online survey of Germans found that 75% would not pay anything to ensure their personal details remain private. Just over half the sample listed concerns about privacy as the main thing that worried them about their digital life. But three-quarters of Germans seemed happy “to be the product” for online companies such as Google and Facebook. This is not so surprising in a way, given privacy was not the normal expectation of our forefathers and perhaps will not be a major need for our children.

In large families, living in small houses, expecting privacy might be seen as weird. So most of us are aware of surveillance, but seem not, as yet, tyrannised by it. Nor is this a Germanic quirk. In a survey of 16 countries, 83% of participants thought privacy was important in their usage of social media. But, as in Germany, only 28% of survey participants were willing to pay to keep their data private. So there appears to be a privacy paradox, by which we all think privacy is important online, but very few of us are willing to pay to maintain it.

The nefarious activities of Cambridge Analytica show what is possible once data harvesting is undertaken to exercise political influence. Cambridge Analytica harvested data from users’ Facebook accounts to target and customise adverts prior to the 2016 Brexit vote in Britain and Donald Trump’s election later that year. The extent of the adverts’ influence is unknown, but previously even the possibility of such steering of a key electoral demographic was unrecognised. One might expect a regulatory/legal fight-back against such a venture, but Gigerenzer points out the most enthusiastic adopter of surveillance methods is the State itself.

Since 9/11 and the Patriot Act, the need to oversee a wave of “domestic terrorists”, concerned about critical race theory or the latest fad, has justified “bulk collection” surveillance methods. So the State simply scrapes up every email, text and recording it can. Nor are State and big tech company surveillance activities entirely separate.

There is a fast-spinning revolving door between Google and the executive arm of the US government. In 2016, when Trump was elected, 22 White House officials joined Google, to be replaced by 31 Google executives populating the new Trump administration. Surveillance capitalism may be just State surveillance by other means. So Cisco helped build China’s internet firewall, and Yahoo!, through its Chinese joint venture, handed user data to the authorities, leading Reporters Without Borders to describe it as little more than a snitch for the Chinese State.

The full bloom of the surveillance State was famously exposed by Edward Snowden’s revelations of State “bulk collection” of data on all of us, via the KARMA POLICE programme in the UK and the PRISM programme in the USA. Shockingly, in private surveys, 80% of Chinese citizens approve of the State’s social credit system.

So surveillance is unlikely to be challenged by our political ruling class. More disturbing is that many of us perhaps do not wish them to. B.F. Skinner, in his book Beyond Freedom and Dignity, suggests that freedom, perhaps our most cherished personal right, is in truth the root of many of our problems.

Freedom encompasses the right to do good things as well as bad. So we observe shoplifting, sexual assault and plain rudeness every day. Skinner suggested screening bad behaviour out and good behaviour in by operant conditioning (Skinner, 1971). We do this every day with children, buying them an ice cream if they tidy their room or read a story. Promoting virtue over vice at least requires consideration by any healthy society. Operant conditioning, or positive reinforcement, takes the form:

Behaviour ⇒ positive reinforcement ⇒ increased frequency of the rewarded behaviour.

A key tool in operant conditioning online is the “like”. I post my political opinion or photo online; it is fun when people like it and less fun when they do not. If many people like my post, I post more of that type. If few people do, maybe it is best to keep my view or photo to myself in future. SSRN works in the same way, alerting you that your paper was the most downloaded yesterday. We may chuckle at such glories, but we certainly notice them too.

The question remains: who guides the operant conditioning, and to what purpose? Thus, the tech giants of our age, Google, Apple, Facebook and Microsoft, advance technological solutions to problems we as citizens are not yet ready to address. In the Newspeak of the smart world age, “Surveillance is Safety. Freedom is danger” (p. 190, Gigerenzer, 2022). Freedom is, for many like Friedrich Hayek, an ultimate value (Hayek, 1948), but the freedom to make war and destroy the climate may be exercised at lethal expense. Thus, we enjoy this gift at our peril, perhaps.

Big nudging, societal control under surveillance

Gigerenzer points out that societal/political change through social media is still most probably only the dream of the CIA or Alexander Nix, former CEO of Cambridge Analytica. But he also points out that “nudges” given online reach such a huge audience that they can still be very worthwhile.

In most national elections only a few contests are truly marginal. So winning the US Presidency comes down to winning a few marginal states, such as Florida and Wisconsin. Within those states only a few counties/districts are marginal. So targeting a few marginal districts in marginal states can be decisive in political campaigns. This is where the marriage of big data and behavioural science, called “big nudging”, comes in. Even small individual effects can have a big aggregate impact, as the rough arithmetic below illustrates.
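
The following back-of-the-envelope calculation in Python uses entirely invented numbers (electorate size, reach and persuasion rate are all assumptions, not figures from the book) simply to show how a tiny per-person effect scales when it is concentrated on marginal districts.

```python
# Illustrative-only arithmetic: why a tiny persuasion effect, targeted at
# marginal districts, can still swing a close contest. All numbers invented.

electorate_in_marginal_districts = 2_000_000   # assumed size of the targeted group
share_reached_online = 0.60                    # assumed reach of the adverts
persuasion_rate = 0.005                        # assumed 0.5% of those reached switch

votes_shifted = electorate_in_marginal_districts * share_reached_online * persuasion_rate
print(int(votes_shifted))  # 6,000 votes: enough to flip a contest decided by a few thousand
```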

Living smart and free in a smart world

Overall, the book reminds us that machines do some things well and humans do other things better. The fundamentally uncertain nature of our lives, economically and socially, favours human heuristics over AI-adopted/modelled rules in making good decisions. Often our model of crime, or inflation, works well until we need it most. Here intuition, rules of thumb and fast-and-frugal methods will prevail.

But the book also points to a more disturbing possibility. If we want to live in a smart world, perhaps humanity needs to surrender some freedoms, driving on some roads, paying with cards, to best reap the benefits of inhabiting a smart world. For this to happen a new transparency of AI/smart system rules will be needed.

If an app can “debank” me, surely I should be told why this decision was made. This very requirement will help fast-and-frugal, transparent and accurate decision-making algorithms prevail. Trading freedom for convenience/security will always be a painful, contested process. But transparent, simple algorithms will allow us as citizens to police and protest that trade-off in a way consistent with maintaining social cohesion within a wealth-generating market economy.

Notes

References

Gigerenzer, G. (1991), “From tools to theories: a heuristic of discovery in cognitive psychology”, Psychological Review, Vol. 98 No. 2, pp. 254-267.

Gigerenzer, G. (2022), How to Stay Smart in a Smart World, Random House, New York, NY, London.

Hayek, F. (1948), “Individualism: true and false”, in Individualism and Economic Order, University of Chicago Press, Chicago, IL, Chapter 1, pp. 1-32.

Katsikopoulos, K., Simsek, O., Buckmann, M. and Gigerenzer, G. (2020), Classification in the Wild: The Science and Art of Transparent Decision-Making, MIT Press, Cambridge, MA.

Katsikopoulos, K., Simsek, O., Buckmann, M. and Gigerenzer, G. (2022), “Transparent modelling of influenza incidence: big data or a single data point from psychological theory?”, International Journal of Forecasting, Vol. 38 No. 2, pp. 613-619.

Lichtman, A. (2020), Predicting the Next President: The Keys to the White House, Rowman and Littlefield, Lanham, MD.

Lo, A. and Mueller, M. (2010), “WARNING: physics envy may be hazardous to your wealth!”, arXiv, available at: www.arxiv.org/pdf/1003.2688.pdf

Lowenstein, R. (2000), When Genius Failed: The Rise and Fall of Long-Term Capital Management, Random House, New York, NY.

Martignon, L. and Hoffrage, U. (1999), “Why does one-reason decision-making work? A case study in ecological rationality”, in Gigerenzer, G., Todd, P.M. and the ABC Research Group (Eds), Simple Heuristics That Make Us Smart, Oxford University Press, Oxford, England.

Martignon, L., Vitouch, O., Takezawa, M. and Forster, M. (2011), “Naïve and yet enlightened: from natural frequencies to fast-and-frugal trees”, in Gigerenzer, G., Hertwig, R. and Pachur, T. (Eds), Heuristics: The Foundations of Adaptive Behavior, Chapter 6, Oxford University Press, Oxford, England.

Rudin, C. and Radin, J. (2019), “Why are we using black box models in AI when we don’t need to? A lesson from an explainable AI competition”, Harvard Data Science Review, Vol. 1 No. 2.

Skinner, B.F. (1971), Beyond Freedom and Dignity, Pelican Books, Bungay, Suffolk, England.

Further reading

Phillips, N.D., Neth, H., Woike, J.K. and Gaissmaier, W. (2017), “FFTrees: a toolbox to create, visualise and evaluate fast-and-frugal decision trees”, Judgment and Decision Making, Vol. 12 No. 4, pp. 344-368.
