
Fear of Brave New Worlds, or Uninspired Headline Writing?

Summer 2016 marked the 85th anniversary of novelist Aldous Huxley completing his manuscript for Brave New World. The widely read novel, a dystopia of happiness-led oppression (in contrast to the fear-controlled populace in Orwell’s 1984), anticipates global adoption of advances in science and technology such as subliminal learning and reproductive medicine. Published in 1932, the book is still a popular read, ranking fifth in Modern Library’s list of the 100 best English-language novels of the 20th century. Unsurprisingly, its title, along with Orwell’s, has also become a stock phrase in headlines, used to signal a new direction for advances in science and technology.

“Test-Tube Babies: The ‘Brave New World’ of Human Pregnancy Is Coming!”
The Evening Independent, July 22, 1978
Designer babies, growing a baby in a bottle, and more.

“Brave New World: Will gene editing rewrite the future of medicine?”
Genome, n.d.
Engineering disease (and other things) out of humans with CRISPR.

“The Brave New World of Three-Parent I.V.F.”
The New York Times Magazine, June 27, 2014
Heritability, and who is the parent?

But there’s a twist—since Huxley presents these advances purely as societal methods for keeping people under strict control, should writers only use the words as a subtle suggestion of imminent doom?

Fear sells

And why not? Bad news sells! A pessimistic slant draws in the reader, especially since wariness about subjects such as artificial intelligence (AI) translates into great plot lines, such as Agent Smith’s battle against Neo in The Matrix. Over the years, Hollywood scriptwriters have run with the theme of AI and machine learning letting computers overtake their human overlords, creating some truly memorable moments on screen.

…and it’s certainly difficult to shake off such technopessimism when it’s reinforced by Stephen Hawking and Elon Musk!

Technopessimism justified?

Nevertheless, does scientific advancement truly deserve the “brave new world” tag? Is it all doom and gloom? Looking at AI and science in particular, there are benefits, and wariness shouldn’t be automatic; AI already helps scientists with their research.

At IBM, we are guided by the term “augmented intelligence” rather than “artificial intelligence.” It is the critical difference between systems that enhance and scale human expertise rather than those that attempt to replicate all of human intelligence.
— IBM’s Response to a White House RFI: Preparing for the Future of Artificial Intelligence

While many stories of how advances in AI affect daily life, from robots for senior care to the self-driving Google car, are available for public debate, there are also many less publicized advances easing the day-to-day life of the average scientist. These include search platforms that sort through primary research papers, and bioinformatics, where software tools manage data analysis for busy scientists.

Every 20 seconds, a new scholarly article is published in biomedicine, resulting in more than 1.5 million per year.
— Introducing Meta Science, Medium.com

AI for science

Before generating data of their own, scientists must first stay current with what’s happening in their field. This means keeping a close eye on publications. Since very few studies hit the big time of widespread media coverage, researchers need online tools to compile reading lists of relevant primary research papers and journals. In the life sciences, the weekly publication rate is overwhelming, and even a simple query on PubMed, the United States National Library of Medicine’s searchable index, returns dozens of hits. Sorting through these for relevance is difficult and time-consuming; it used to take me several hours every week to stay up to date.
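To see just how raw that starting point is, here’s a minimal sketch using Biopython’s Entrez wrapper around the PubMed API (the search term and email address are placeholders, and it assumes Biopython is installed and you have network access). All it hands back is an unranked list of hit IDs, which is exactly the pile that needs sorting.

```python
# Minimal PubMed query via Biopython's Entrez module (requires Biopython
# and network access). The search term and email are placeholders.
from Bio import Entrez

Entrez.email = "you@example.org"   # NCBI asks for a contact address

handle = Entrez.esearch(db="pubmed", term="proteomics[Title] AND 2016[PDAT]", retmax=20)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "hits; first IDs:", record["IdList"][:5])
```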

However, new search tools such as Meta Science help researchers navigate this complex soup by constantly ranking and comparing papers. Instead of pulling up every article that matches the search term, Meta Science retrieves only those that are highly relevant, using AI to continually scan and score publications as researchers publish and cite them in other papers. Another platform, Bioz, does something similar: it ranks the experimental procedures, reagents and instruments cited in papers, freeing up research time for actual experimental design.
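Neither Meta Science nor Bioz publishes its scoring code, so the following is only a toy sketch of the general idea of citation-weighted, query-matched ranking; the fields, weights and numbers are invented purely for illustration and don’t reflect how either platform actually works.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Paper:
    title: str
    keywords: set        # terms pulled from title/abstract
    citations: int       # times cited by later papers
    published: date

def relevance(paper, query_terms, today):
    """Toy score: keyword overlap, boosted by citations, discounted by age.
    The weights are arbitrary and purely for illustration."""
    overlap = len(paper.keywords & query_terms) / max(len(query_terms), 1)
    citation_boost = 1.0 + paper.citations / 100.0
    age_years = (today - paper.published).days / 365.0
    freshness = 1.0 / (1.0 + age_years)
    return overlap * citation_boost * freshness

papers = [
    Paper("CRISPR off-target effects", {"crispr", "off-target"}, 250, date(2014, 6, 1)),
    Paper("SRM assay design", {"srm", "proteomics"}, 40, date(2016, 1, 15)),
]
query, today = {"crispr", "gene", "editing"}, date(2016, 8, 1)
for paper in sorted(papers, key=lambda p: relevance(p, query, today), reverse=True):
    print(f"{relevance(paper, query, today):.3f}  {paper.title}")
```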

Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data. Bioinformatics has been used for in silico analyses of biological queries using mathematical and statistical techniques.
— Wikipedia

Another major time crunch for researchers is data crunching: there’s a lot more of it to analyze these days. Modern laboratory instruments churn out far more results, and automation means that multiplexed sample analysis is common. The spreadsheets of old simply don’t cut it anymore in these data-hungry days.

Many of the papers I reviewed for the Accelerating Proteomics blog involve software tool design, helping researchers zip through the vast arrays of data coming from modern mass spectrometry. When spectral data files regularly run into the gigabyte range, researchers need powerful software that cuts through the noise to make sense of peptide identification and quantification. In bioinformatics, researchers, statisticians and mathematics wizards collaborate to create algorithms that make data analysis easier and faster at the click of a mouse.
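For a flavour of the bookkeeping such tools automate, here’s a toy Python sketch that matches an observed peptide mass against a small theoretical database within a parts-per-million window. The peptide names and masses are made up, and real search engines also score fragment ions, modifications and decoys; this is illustration only.

```python
# Toy peptide-mass matching: assign an observed mass to candidate peptides
# within a parts-per-million tolerance. Names and masses are illustrative;
# real search engines also score fragment ions, modifications and decoys.

PEPTIDE_MASSES = {          # hypothetical monoisotopic masses in daltons
    "peptide_A": 1148.61,
    "peptide_B": 926.49,
    "peptide_C": 1766.90,
}

def match_mass(observed, tolerance_ppm=10.0):
    """Return (peptide, ppm error) pairs within the tolerance window."""
    hits = []
    for peptide, theoretical in PEPTIDE_MASSES.items():
        ppm_error = (observed - theoretical) / theoretical * 1e6
        if abs(ppm_error) <= tolerance_ppm:
            hits.append((peptide, round(ppm_error, 2)))
    return hits

for mass in (1148.615, 926.4905, 2000.0):
    print(mass, "->", match_mass(mass) or "no match")
```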

In the world of protein chemistry, tools such as PREGO make quick work of multiplex assay design. This program designs selected reaction monitoring (SRM) assays, using existing data sets to select the “best performing analytes” for the job. It thus bypasses the rounds of tedious manual selection, testing and validation normally required in assay development, improving SRM performance as well as freeing up research time. Another program, Novor, allows real-time readouts from peptide fragments zipping through a mass spectrometer, bringing proteomics up to the same speed as genomic sequencing. No more waiting for results.
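PREGO itself relies on a predictor trained on large public data sets, so the sketch below is only a hypothetical stand-in for the selection step it automates: given previously observed signal intensities (every name and number here is invented), keep the most responsive peptides per protein as SRM candidates.

```python
from collections import defaultdict

# Hypothetical prior observations: (protein, peptide, mean signal intensity).
# All names and numbers are invented; this is not PREGO's actual predictor.
observations = [
    ("PROT_A", "peptide_A1", 9.2e5),
    ("PROT_A", "peptide_A2", 4.1e5),
    ("PROT_A", "peptide_A3", 7.8e5),
    ("PROT_B", "peptide_B1", 3.3e5),
    ("PROT_B", "peptide_B2", 6.0e5),
]

def pick_srm_candidates(obs, per_protein=2):
    """Rank each protein's peptides by observed intensity, keep the top N."""
    by_protein = defaultdict(list)
    for protein, peptide, intensity in obs:
        by_protein[protein].append((intensity, peptide))
    return {
        protein: [pep for _, pep in sorted(peps, reverse=True)[:per_protein]]
        for protein, peps in by_protein.items()
    }

print(pick_srm_candidates(observations))
# {'PROT_A': ['peptide_A1', 'peptide_A3'], 'PROT_B': ['peptide_B2', 'peptide_B1']}
```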

AI IRL

Talk Science to Me client and biomathematician Shelly DeForte uses bioinformatics to study protein structure and protein disorder and its influence on functionality. She creates algorithms that pair structural assessments of amino acid sequence with functional behaviour in enzymes and other proteins. In silico modelling replaces the need to run multiple assays to measure activity and function.

In her most recent paper, she and co-author Vladimir Uversky used algorithms to classify missing regions in protein structure databases. Instead of manually poring over each instance of irregularity in the Protein Data Bank, their bioinformatics tools automatically pick the missing regions out for further study and analyze the patterns of disorder seen across multiple absent sites. Work that would take days to eyeball now happens in a click.
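Their actual classification pipeline is far more involved, but the basic bookkeeping, finding stretches of residues with no coordinates in a deposited structure, takes only a few lines of Python. The sketch below assumes you’ve already extracted the resolved residue numbers for a chain; the example chain is invented.

```python
def missing_regions(resolved, first, last):
    """Return (start, end) runs of residue numbers that have no coordinates,
    given the expected sequence range [first, last]."""
    present = set(resolved)
    gaps, start = [], None
    for i in range(first, last + 1):
        if i not in present and start is None:
            start = i                      # a gap opens
        elif i in present and start is not None:
            gaps.append((start, i - 1))    # the gap closes
            start = None
    if start is not None:
        gaps.append((start, last))         # gap runs to the chain's end
    return gaps

# Invented chain: residues 1-100 expected, with an internal loop and the
# C-terminal tail unresolved in the deposited structure.
resolved = list(range(1, 35)) + list(range(41, 90))
print(missing_regions(resolved, 1, 100))   # [(35, 40), (90, 100)]
```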

Machine learning is a method of data analysis that automates analytical model building. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look.
— Machine Learning: What it is and why it matters
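That definition can sound abstract, so here is about the smallest possible illustration, a sketch using scikit-learn (assuming it’s installed) on invented data: the classifier infers the boundary between two toy classes from examples rather than being hand-coded with the rule.

```python
# A minimal "learning from data" illustration with scikit-learn (assumed
# installed). The training data are invented; the model infers the rule
# separating the two classes rather than being told it explicitly.
from sklearn.tree import DecisionTreeClassifier

X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]   # toy features
y = ["low", "low", "high", "high"]                      # toy labels

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[0.15, 0.15], [0.85, 0.95]]))      # ['low' 'high']
```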

These few examples show that scientists definitely benefit from advances in technology: without bioinformatics tools such as PREGO, they would be back to crunching the numbers manually to interpret data. Added to the hours spent hunting down relevant papers to read, this manual analysis cuts down on productive hours in the lab and delays results. Furthermore, since many of these tools also use “scary-sounding” machine learning to teach themselves and improve functionality, the speed and accuracy just keep getting better and better.

The verdict

So, wariness justified or not? Technopessimistic or techno-optimistic? I’d suggest optimism, since gaining efficiencies in lab operation and data analysis is a clear benefit. Definitely not doom and gloom.

Time to rethink “brave new world” as a stock science headline?

Huxley originally meant his title as an ironic statement about the society the novel portrays. As Maeve Maddox suggests in Daily Writing Tips, maybe we writers should use it “only in a context of dehumanization or oppressive surveillance.”

Okay, I suppose automation is a form of dehumanization, removing the hands-on element from search or analysis. Also, #confession: at times I did feel “controlled” by my lab schedule; however, at no point did I feel that the thermal cycler was watching me.

Let’s stop using “brave new world” to signal new frontiers in science, or anywhere else for that matter…unless we’re truly being oppressed!