
The Nightmare Dream of a Thinking Machine

The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

***

You can also find the exact opposite of such sunny optimism. Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it “could spell the end of the human race.” Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” Musk then followed with a $10 million grant to the Future of Life Institute. Not to be confused with Bostrom’s center, this is an organization that says it is “working to mitigate existential risks facing humanity,” the ones that could arise “from the development of human-level artificial intelligence.”

Paul Ford writing in MIT Technology Review about our conceptions of artificial intelligence, and why they can scare us.

Read the story

What Would a More Efficient Clinical Trial System Look Like?

Photo: Pixabay

What might a more-efficient trial system look like? One collaboration in Chicago offers a possible way forward.

Working together, several of the city’s academic medical centers have established a joint network for conducting clinical trials. Participating institutions now routinely interview all of their hospitalized patients, regardless of diagnosis, to keep detailed records on their health status. With permission, those records are made available to researchers.

Over 15 years, the process has enrolled 100,000 patients, many of whom are then recruited for clinical trials, said David O. Meltzer, a professor of medicine and director of the Center for Health and the Social Sciences at the University of Chicago. Much of the data is collected by undergraduates, and the team has grown large enough that newcomers can be trained without the need to constantly rebuild for each new trial, Dr. Meltzer said. “It’s wildly cost-effective,” he said, “and it’s incredibly good for the students.”

Even more savings could be realized by reconsidering when trial participants are even needed. A dozen years ago, Benjamin A. Olken, a professor of economics at the Massachusetts Institute of Technology, wanted to study corruption in Indonesia, to learn which of two strategies—threatening audits of government officials or giving community members a more direct role in monitoring—would do a better job of keeping road builders from “cheating.”

Paul Basken writing in The Chronicle of Higher Education about what he learned over the course of seven years as a participant in a medical clinical trial.

Read the story

Really Good Shit: A Reading List

Edited and cropped image by Quinn Dombrowski (CC BY-SA 2.0)

As the Japanese children’s book author Tarō Gomi once wrote: everyone poops. But we don’t talk about this openly or often enough. In fact, talking and reading about poop might make you want to hold your nose — but it’ll also open your eyes. Here are nine pieces about shit, from a DIY mixture a woman used to treat her life-threatening infection, to prehistoric poo that brings us one step closer to understanding the origins of life after the dinosaur age.

“The Magic Poop Potion” (Lina Zeldovich, Narratively, July 2014)

Suffering from a recurring intestinal infection called C. diff, Catherine Duff decided to take matters into her own hands. Using her husband’s healthy stool, the couple concocted an unconventional cocktail — with a plastic enema, a blender, and cheesecloth — which he then administered to her. This procedure, known as fecal microbiota transplant (FMT), saved her life. Duff advocated for FMT as a viable treatment when the FDA considered regulating it as an “investigational new drug,” and founded the Fecal Transplant Foundation to educate the public and to connect patients, doctors, and stool donors. Read more…

Using the British Railway Mania of the 1840s to Explain the Beanie Baby Craze

Photo: https://www.flickr.com/photos/23488805@N02/

Andrew Odlyzko, a mathematician and bubble expert, proposes a simpler theory explaining speculative panics in his study on the British Railway Mania of the 1840s. Odlyzko credits Railway Mania in part to a “collective hallucination,” an extreme form of groupthink wherein a significant chunk of society feverishly buys into a shared dream with no regard for the skeptics and naysayers. (Some scholars think Jesus’ resurrection might have been an acute instance of collective hallucination.)

The existence of groupthink has been confirmed in a rich assortment of studies, and Odlyzko’s theory expands the idea to economic bubbles. Under his analysis, the initial coterie of Beanie Baby collectors comprised an in-group that shared the great secret of Beanie Babies’ worth. As more people discovered the toy, they yearned to learn this secret and partake in the impending financial success of the Beanie Babies market. Soon, millions of Americans were gripped by the conviction that they had discovered an easy path to personal wealth. And thanks to their collective hallucination of Beanie Babies’ worth, none of these collectors ever realized that the only thing driving the Beanie Babies market was their own conviction that the toys were valuable.

These theories may explain the mass delusions that enabled a large chunk of the country to believe that a $5 Beanie Baby could eventually be worth thousands. What they never quite get at, however, is that initial spark of fascination: how the ineffable appeal of Beanie Babies turned them, and not one of a thousand other 1990s trends, into a collective mania. That allure can probably never be quantified.

Mark Joseph Stern writing in Slate about the economics and psychology of the Beanie Babies craze.

Read the story

Glamorous Crossing: How Pan Am Airways Dominated International Travel in the 1930s

Meredith Hindley | Longreads | February 2015 | 18 minutes (4,383 words)

In August 1936, Americans retreated from the summer heat into movie theaters to watch China Clipper, the newest action-adventure from Warner Brothers. The film starred Pat O’Brien as an airline executive obsessed with opening the first airplane route across the Pacific Ocean. An up-and-coming Humphrey Bogart played a grizzled pilot full of common sense and derring-do.

The real star of the film, however, was the China Clipper, a gleaming four-engine silver Martin M-130. As the Clipper makes its maiden flight in the film, the flying boat cuts a white wake into the waters off San Francisco before soaring into the air and passing over a half-constructed Golden Gate Bridge. As it crosses the Pacific, cutting through the clouds and battling a typhoon, a team of radiomen and navigators on the ground follows its course, relaying updated weather information. The plane arrives in Macao to a harbor packed with cheering spectators and beaming government officials. Read more…

Taking the Slow Road: An Interview with Author Katherine Heiny

Photo by Leila Barbaro

Sari Botton | Longreads | February 2015 | 14 minutes (3,683 words)

Ed. note: Katherine Heiny will be in conversation with Sari Botton at McNally Jackson in New York on Wednesday, Feb. 11 at 7 p.m.

* * *

In the fall of 1992, I found myself very much affected by “How to Give the Wrong Impression,” a short story in the September 21 issue of The New Yorker about a twentysomething psych grad student who’s trying hard to seem satisfied keeping things platonic between her and her handsome roommate.

To begin with, I had a lot in common with the protagonist, more than I’d have wanted to admit at the time. I was in my twenties, too—27 to be exact—newly divorced from the second person I’d ever so much as dated, and most importantly, I was very busy trying to seem satisfied keeping things platonic with a rakish “friend.” I didn’t just recognize that young woman, I was her at that moment in my life. Read more…

Blast Force: The Invisible War on the Brain

After the First World War, family and friends said that sometimes, boys came back from overseas “not right in the head.” Nearly 100 years later, the American military is only just starting to understand the effects of bomb blasts on soldiers’ brains and the prescience of those casual observations. Caroline Alexander reports in National Geographic on Traumatic Brain Injury and its devastating effects on soldiers and their families.

“Most of our medical research on blast injuries was either on fragmentation wounds or what happens in gas-filled organs—everyone was always concerned in a thermonuclear explosion what happened to your lungs and your gastrointestinal tract,” Lt. Col. Kevin “Kit” Parker, the Tarr Family Professor of Bioengineering and Applied Physics at Harvard, told me. “We completely overlooked the brain. Today the enemy has developed a weapon system that is targeted toward our scientific weak spot.”

Parker, a towering figure with a shaved head and booming voice, is also a former U.S. Army infantry officer who served two tours in Afghanistan, where he saw and felt the effects of blast force. “There was a flash in the sky, and I turned back toward the mountains where the fighting was,” Parker said, recalling the day in January 2003 when, in the hills of Kandahar, the shock wave from a distant explosion passed through his body. “It just felt like it lifted my innards and put them back down.”
Read more…

Link Rot, or Why the Web May Be Killing Footnotes

The Web dwells in a never-ending present. It is—elementally—ethereal, ephemeral, unstable, and unreliable. Sometimes when you try to visit a Web page what you see is an error message: “Page Not Found.” This is known as “link rot,” and it’s a drag, but it’s better than the alternative. More often, you see an updated Web page; most likely the original has been overwritten. (To overwrite, in computing, means to destroy old data by storing new data in their place; overwriting is an artifact of an era when computer storage was very expensive.) Or maybe the page has been moved and something else is where it used to be. This is known as “content drift,” and it’s more pernicious than an error message, because it’s impossible to tell that what you’re seeing isn’t what you went to look for: the overwriting, erasure, or moving of the original is invisible.

For the law and for the courts, link rot and content drift, which are collectively known as “reference rot,” have been disastrous. In providing evidence, legal scholars, lawyers, and judges often cite Web pages in their footnotes; they expect that evidence to remain where they found it as their proof, the way that evidence on paper—in court records and books and law journals—remains where they found it, in libraries and courthouses. But a 2013 survey of law- and policy-related publications found that, at the end of six years, nearly fifty per cent of the URLs cited in those publications no longer worked. According to a 2014 study conducted at Harvard Law School, “more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information.”

The overwriting, drifting, and rotting of the Web is no less catastrophic for engineers, scientists, and doctors. Last month, a team of digital library researchers based at Los Alamos National Laboratory reported the results of an exacting study of three and a half million scholarly articles published in science, technology, and medical journals between 1997 and 2012: one in five links provided in the notes suffers from reference rot. It’s like trying to stand on quicksand.

The footnote, a landmark in the history of civilization, took centuries to invent and to spread. It has taken mere years nearly to destroy. A footnote used to say, “Here is how I know this and where I found it.” A footnote that’s a link says, “Here is what I used to know and where I once found it, but chances are it’s not there anymore.” It doesn’t matter whether footnotes are your stock-in-trade. Everybody’s in a pinch. Citing a Web page as the source for something you know—using a URL as evidence—is ubiquitous. Many people find themselves doing it three or four times before breakfast and five times more before lunch. What happens when your evidence vanishes by dinnertime?

Jill Lepore, writing for the New Yorker about the Internet Archive and the difficulties of preserving information on the Web.

Read the story

An Ex-Industrial Fisherman Rethinks His Job

Bren Smith. Photo by echoinggreen

Diane Ackerman | The Human Age: The World Shaped By Us | W. W. Norton & Company | September 2014 | 16 minutes (3,877 words)

Below is an excerpt from the book The Human Age: The World Shaped By Us, by Diane Ackerman, as recommended by Longreads contributor Dana Snitzky. Read more…

Really Old Stuff: A Reading List About Our Prehistoric Past

Image: Lisa Weichel

Even with digital archives and electronic records keeping track of our lives, we often find it a challenge to piece together our own pasts, to say nothing of our parents’ or grandparents’. What, then, of the lives of humans and organisms whose only traces are already thousands of years old?

From an aspen colony that has been cloning itself for over 80,000 years to a coral reef fossilized eons ago, these stories bring to life irretrievable worlds and challenge our notions of time and durability.

1. “First Artists” (Chip Walter, National Geographic Magazine, January 2015)

Admiring intricate cave paintings in France, Germany, and South Africa, Walter explores how humans laid the foundation for visual art in “sporadic flare-ups of creativity” some 30,000-60,000 years ago.

Read more…