Category Archives: technology

Slaves of productivity

Quinn Norton writes: We dream now of making Every Moment Count, of achieving flow and never leaving, creating one project that must be better than the last, of working harder and smarter. We multitask, we update, and we conflate status with long hours worked in no paid overtime systems for the nebulous and fantastic status of being Too Important to have Time to Ourselves, time to waste. But this incarnation of the American dream is all about doing, and nothing about doing anything good, or even thinking about what one was doing beyond how to do more of it more efficiently. It was not even the surrenders to hedonism and debauchery or greed our literary dreams have recorded before. It is a surrender to nothing, to a nothingness of lived accounting.

This moment’s goal of productivity, with its all-consuming practice and unattainable horizon, is perfect for our current corporate world. Productivity never asks what it builds, just how much of it can be piled up before we leave or die. It is irrelevant to pleasure. It’s agnostic about the fate of humanity. It’s not even selfish, because production negates the self. Self can only be a denominator, holding up a dividing bar like a caryatid trying to hold up a stone roof.

I am sure this started with the Industrial Revolution, but what has swept through this generation is more recent. This idea of productivity started in the 1980s, with the lionizing of the hardworking greedy. There’s a critique of late capitalism to be had for sure, but what really devastated my generation was the spiritual malaise inherent in Taylorism’s perfectly mechanized human labor. But Taylor had never seen a robot or a computer perfect his methods of being human. By the 1980s, we had. In the age of robots we reinvented the idea of being robots ourselves. We wanted to program our minds and bodies and have them obey clocks and routines. In this age of the human robot, of the materialist mind, being efficient took the pre-eminent spot, beyond goodness or power or wisdom or even cruel greed. [Continue reading…]


The faster we go, the more time we lose

Mark C. Taylor writes: “Sleeker. Faster. More Intuitive” (The New York Times); “Welcome to a world where speed is everything” (Verizon FiOS); “Speed is God, and time is the devil” (chief of Hitachi’s portable-computer division). In “real” time, life speeds up until time itself seems to disappear—fast is never fast enough, everything has to be done now, instantly. To pause, delay, stop, slow down is to miss an opportunity and to give an edge to a competitor. Speed has become the measure of success—faster chips, faster computers, faster networks, faster connectivity, faster news, faster communications, faster transactions, faster deals, faster delivery, faster product cycles, faster brains, faster kids. Why are we so obsessed with speed, and why can’t we break its spell?

The cult of speed is a modern phenomenon. In “The Futurist Manifesto” in 1909, Filippo Tommaso Marinetti declared, “We say that the splendor of the world has been enriched by a new beauty: the beauty of speed.” The worship of speed reflected and promoted a profound shift in cultural values that occurred with the advent of modernity and modernization. With the emergence of industrial capitalism, the primary values governing life became work, efficiency, utility, productivity, and competition. When Frederick Winslow Taylor took his stopwatch to the factory floor in the early 20th century to increase workers’ efficiency, he began a high-speed culture of surveillance so memorably depicted in Charlie Chaplin’s Modern Times. Then, as now, efficiency was measured by the maximization of rapid production through the programming of human behavior.

With the transition from mechanical to electronic technologies, speed increased significantly. The invention of the telegraph, telephone, and stock ticker liberated communication from the strictures imposed by the physical means of conveyance. Previously, messages could be sent no faster than people, horses, trains, or ships could move. By contrast, immaterial words, sounds, information, and images could be transmitted across great distances at very high speed. During the latter half of the 19th century, railway and shipping companies established transportation networks that became the backbone of national and international information networks. When the trans-Atlantic cable (1858) and transcontinental railroad (1869) were completed, the foundation for the physical infrastructure of today’s digital networks was in place.

Fast-forward 100 years. During the latter half of the 20th century, information, communications, and networking technologies expanded rapidly, and transmission speed increased exponentially. But more than data and information were moving faster. Moore’s Law, according to which the speed of computer chips doubles every two years, now seems to apply to life itself. Plugged in 24/7/365, we are constantly struggling to keep up but are always falling further behind. The faster we go, the less time we seem to have. As our lives speed up, stress increases, and anxiety trickles down from managers to workers, and parents to children. [Continue reading…]


When digital nature replaces nature

Diane Ackerman writes: Last summer, I watched as a small screen in a department store window ran a video of surfing in California. That simple display mesmerized high-heeled, pin-striped, well-coiffed passersby who couldn’t take their eyes off the undulating ocean and curling waves that dwarfed the human riders. Just as our ancient ancestors drew animals on cave walls and carved animals from wood and bone, we decorate our homes with animal prints and motifs, give our children stuffed animals to clutch, cartoon animals to watch, animal stories to read. Our lives trumpet, stomp, and purr with animal tales, such as The Bat Poet, The Velveteen Rabbit, Aesop’s Fables, The Wind in the Willows, The Runaway Bunny, and Charlotte’s Web. I first read these wondrous books as a grown-up, when both the adult and the kid in me were completely spellbound. We call each other by “pet” names, wear animal-print clothes. We ogle plants and animals up close on screens of one sort or another. We may not worship or hunt the animals we see, but we still regard them as necessary physical and spiritual companions. It seems the more we exile ourselves from nature, the more we crave its miracle waters. Yet technological nature can’t completely satisfy that ancient yearning.

What if, through novelty and convenience, digital nature replaces biological nature? Gradually, we may grow used to shallower and shallower experiences of nature. Studies show that we’ll suffer. Richard Louv writes of widespread “nature deficit disorder” among children who mainly play indoors — an oddity quite new in the history of humankind. He documents an upswell in attention disorders, obesity, depression, and lack of creativity. A San Diego fourth-grader once told him: “I like to play indoors because that’s where all the electrical outlets are.” Adults suffer equally. It’s telling that hospital patients with a view of trees heal faster than those gazing at city buildings and parking lots. In studies conducted by Peter H. Kahn and his colleagues at the University of Washington, office workers in windowless cubicles were given flat-screen views of nature. They reaped the benefits of greater health, happiness, and efficiency than those without virtual windows. But they weren’t as happy, healthy, or creative as people given real windows with real views of nature.

As a species, we’ve somehow survived large and small ice ages, genetic bottlenecks, plagues, world wars, and all manner of natural disasters, but I sometimes wonder if we’ll survive our own ingenuity. At first glance, it seems like we may be living in sensory overload. The new technology, for all its boons, also bedevils us with speed demons, alluring distractors, menacing highjinks, cyber-bullies, thought-nabbers, calm-frayers, and a spiky wad of miscellaneous news. Some days it feels like we’re drowning in a twittering bog of information. But, at exactly the same time, we’re living in sensory poverty, learning about the world without experiencing it up close, right here, right now, in all its messy, majestic, riotous detail. Like seeing icebergs without the cold, without squinting in the Antarctic glare, without the bracing breaths of dry air, without hearing the chorus of lapping waves and shrieking gulls. We lose the salty smell of the cold sea, the burning touch of ice. If, reading this, you can taste those sensory details in your mind, is that because you’ve experienced them in some form before, as actual experience? If younger people never experience them, can they respond to words on the page in the same way?

The farther we distance ourselves from the spell of the present, explored by all our senses, the harder it will be to understand and protect nature’s precarious balance, let alone the balance of our own human nature. [Continue reading…]


Brain shrinkage, poor concentration, anxiety, and depression linked to media-multitasking

Simultaneously using mobile phones, laptops and other media devices could be changing the structure of our brains, according to new University of Sussex research.

A study published today (24 September) in PLOS ONE reveals that people who frequently use several media devices at the same time have lower grey-matter density in one particular region of the brain compared to those who use just one device occasionally.

The research supports earlier studies showing connections between high media-multitasking activity and poor attention in the face of distractions, along with emotional problems such as depression and anxiety.

But neuroscientists Kep Kee Loh and Dr Ryota Kanai point out that their study reveals a link rather than causality and that a long-term study needs to be carried out to understand whether high concurrent media usage leads to changes in the brain structure, or whether those with less-dense grey matter are more attracted to media multitasking. [Continue reading…]


No, a ‘supercomputer’ did NOT pass the Turing Test for the first time and everyone should know better

Following numerous “reports” (i.e. numerous regurgitations of a press release from Reading University) on an “historic milestone in artificial intelligence” having been passed “for the very first time by supercomputer Eugene Goostman” at an event organized by Professor Kevin Warwick, Mike Masnick writes:

If you’ve spent any time at all in the tech world, you should automatically have red flags raised around that name. Warwick is somewhat infamous for his ridiculous claims to the press, which gullible reporters repeat without question. He’s been doing it for decades. All the way back in 2000, we were writing about all the ridiculous press he got for claiming to be the world’s first “cyborg” for implanting a chip in his arm. There was even a — since taken down — Kevin Warwick Watch website that mocked and categorized all of his media appearances in which gullible reporters simply repeated all of his nutty claims. Warwick had gone quiet for a while, but back in 2010, we wrote about how his lab was getting bogus press for claiming to have “the first human infected with a computer virus.” The Register has rightly referred to Warwick as both “Captain Cyborg” and a “media strumpet” and have long been chronicling his escapades in exaggerating bogus stories about the intersection of humans and computers for many, many years.

Basically, any reporter should view extraordinary claims associated with Warwick with extreme caution. But that’s not what happened at all. Instead, as is all too typical with Warwick claims, the press went nutty over it, including publications that should know better.

Anyone can try having a “conversation” with Eugene Goostman.

If the strings of words it spits out give you the impression you’re talking to a human being, that’s probably an indication that you don’t spend enough time talking to human beings.


Physicists report finding reliable way to teleport data

The New York Times reports: Scientists in the Netherlands have moved a step closer to overriding one of Albert Einstein’s most famous objections to the implications of quantum mechanics, which he described as “spooky action at a distance.”

In a paper published on Thursday in the journal Science, physicists at the Kavli Institute of Nanoscience at the Delft University of Technology reported that they were able to reliably teleport information between two quantum bits separated by three meters, or about 10 feet.

Quantum teleportation is not the “Star Trek”-style movement of people or things; rather, it involves transferring so-called quantum information — in this case what is known as the spin state of an electron — from one place to another without moving the physical matter to which the information is attached.

Classical bits, the basic units of information in computing, can have only one of two values — either 0 or 1. But quantum bits, or qubits, can simultaneously describe many values. They hold out both the possibility of a new generation of faster computing systems and the ability to create completely secure communication networks.

Moreover, the scientists are now closer to definitively proving Einstein wrong in his early disbelief in the notion of entanglement, in which particles separated by light-years can still appear to remain connected, with the state of one particle instantaneously affecting the state of another. [Continue reading…]
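
As a rough illustration of the distinction the article draws — standard textbook notation, not anything taken from the Delft paper itself — a classical bit is definitely 0 or definitely 1, while a qubit can occupy a superposition of both values at once, and two qubits can be entangled so that measuring one instantly fixes the outcome for the other:

    % A single qubit: a superposition of the two classical values,
    % weighted by complex amplitudes alpha and beta
    \[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1 \]

    % An entangled Bell pair shared between two distant qubits: neither has
    % a definite value on its own, but their measurement outcomes always agree
    \[ |\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\,\bigl(|00\rangle + |11\rangle\bigr) \]

It is such a shared entangled pair, plus two ordinary classical bits sent alongside it, that lets the spin state of an electron be reconstructed at the far end without the electron itself ever travelling.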


The man who wants to save the world and make a little money on the side

Bruce Falconer writes: In July 2012, a commercial fishing charter called Ocean Pearl motored through the frigid waters of the North Pacific. It carried 100 tons of iron dust and a crew of 11, led by a tall and heavyset 62-year-old American named Russ George. Passing beyond Canada’s territorial limit, the vessel arrived at an area of swirling currents known as the Haida eddies. There, in an eddy that had been chosen for the experiment, George and his crew mixed their cargo of iron with seawater and pumped it into the ocean through a hose, turning the waters a cloudy red. In early August, the ship returned to port, where the crew loaded an additional 20 tons of iron. They dumped it near the same Haida eddy a few weeks later, bringing to an end the most audacious and, before long, notorious attempt yet undertaken by man to modify Earth’s climate.

The expedition was grand in its aims and obscure in its patronage. Funding George’s voyage was a village of Haida Indians on Haida Gwaii, a remote Canadian archipelago about 500 miles northwest of Vancouver. George and his business partners had gained the town’s support for a project of dumping iron dust into the ocean to stimulate the growth of a plankton bloom. The plankton would help feed starving salmon, upon which the Haida had traditionally depended for their livelihood, and also remove a million tons of carbon dioxide from the atmosphere. (In algae form, plankton, like all plants, absorbs CO2 through photosynthesis.) The intended result: a replenished fish population—and millions of dollars’ worth of “carbon credits” that could be sold on the international market.

Back on land, in Vancouver, George and his associates drafted a report on the expedition. It claimed that Ocean Pearl had seeded more than 3,800 square miles of barren waters, leaving in its wake “a verdant emerald sea lush with the growth of a hundred million tonnes of plankton.” According to the account, fin, sperm, and sei whales, rarely seen in the region, appeared in large numbers, along with killer whales, dolphins, schools of albacore tuna, and armies of night-feeding squid. Albatross, storm petrels, sooty shearwaters, and other seabirds had circled above the ship, while flocks of Brant geese came to rest on the water and drifted with the bloom.

But George did little to publicize these findings. Instead, he set about compiling the data in private, telling people that he intended to produce a precise estimate of the CO2 he had removed from the atmosphere and then invite an independent auditor to certify his claims.

If that was the plan, it quickly fell apart. In October 2012, the Guardian of London broke the news of George’s expedition, saying it “contravenes two UN conventions” against large-scale ocean fertilization experiments. Numerous media outlets followed up with alarmed, often savage, reports, some of which went so far as to label George a “rogue geoengineer” or “eco-terrorist.” Amid the uproar, Canadian environment minister Peter Kent accused George of “rogue science” and promised that any violation of the country’s environmental law would be “prosecuted to the full extent.”

George, for his part, spoke of media misrepresentation, and he stressed that he was engaged in cautious research. Amid the controversy, in an interview with Scientific American, he was asked whether his iron fertilization had worked. “We don’t know,” he answered. “The correct attitude is: ‘Data, speak to me.’ Do the work, get the data, let it speak to you and tell you what the facts might be.” While most commenters seemed to think George had gone too far, some expressed sympathy—or at least puzzled ambivalence. A Salon headline the following summer asked, “Does Russ George Deserve a Nobel Prize or a Prison Sentence?”

George’s efforts place him in the company of a small but growing group of people convinced that global warming can be halted only with the aid of dramatic intervention in our planet’s natural processes, an approach known as geoengineering. The fixes envisioned by geoengineers range from the seemingly trivial, like painting roads and roofs white to reflect solar radiation, to the extraterrestrial, like a proposal by one Indian physicist to use the explosive power of nuclear fusion to elongate Earth’s orbit by one or two percent, thus reducing solar intensity. (It would also add 5.5 days to the year.)
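
For what it’s worth, the 5.5-day figure is consistent with Kepler’s third law (my check, not the article’s): the orbital period scales as the 3/2 power of the orbit’s size, so stretching the orbit by 1% lengthens the year by roughly 1.5%.

    % Kepler's third law: period T grows as the 3/2 power of the semi-major axis a
    \[ T \propto a^{3/2} \quad\Rightarrow\quad \frac{\Delta T}{T} \approx \frac{3}{2}\,\frac{\Delta a}{a}
       = \frac{3}{2}\times 1\% = 1.5\% \quad\Rightarrow\quad
       \Delta T \approx 0.015 \times 365.25\ \text{days} \approx 5.5\ \text{days} \]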

Because its methods tend to be both ambitious and untested, geoengineering is closely tied to the dynamics of alarm—feeding on it and causing it in equal measure. [Continue reading…]


The Naked Future and Social Physics — review

Evgeny Morozov writes: In “On What We Can Not Do,” a short and pungent essay published a few years ago, the Italian philosopher Giorgio Agamben outlined two ways in which power operates today. There’s the conventional type that seeks to limit our potential for self-development by restricting material resources and banning certain behaviors. But there’s also a subtler, more insidious type, which limits not what we can do but what we can not do. What’s at stake here is not so much our ability to do things but our capacity not to make use of that very ability.

While each of us can still choose not to be on Facebook, have a credit history or build a presence online, can we really afford not to do any of those things today? It was acceptable not to have a cellphone when most people didn’t have them; today, when almost everybody does and when our phone habits can even be used to assess whether we qualify for a loan, such acts of refusal border on the impossible.

For Agamben, it’s this double power “to be and to not be, to do and to not do” that makes us human. This active necessity to choose (and err) contributes to the development of individual faculties that shape our subjectivity. The tragedy of modern man, then, is that “he has become blind not to his capacities but to his incapacities, not to what he can do but to what he cannot, or can not, do.”

This blindness to the question of incapacities mars most popular books on recent advances in our ability to store, analyze and profit from vast amounts of data generated by our gadgets. (Our wherewithal not to call this phenomenon by the ugly, jargony name of Big Data seems itself to be under threat.) The two books under review, alas, are no exception.

In “The Naked Future,” Patrick Tucker, an editor at large for The Futurist magazine, surveys how this influx of readily available data will transform every domain of our existence, from improving our ability to predict earthquakes (thanks to the proliferation of sensors) to producing highly customized education courses that would tailor their content and teaching style, in real time, to the needs of individual students. His verdict: It’s all for the better.

Since most of us lead rather structured, regular lives — work, home, weekend — even a handful of data points (our location, how often we call our friends) proves useful in predicting what we may be doing a day or a year from now. “A flat tire on a Monday at 10 a.m. isn’t actually random. . . . We just don’t yet know how to model it,” Tucker writes.
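
A minimal sketch of why so few data points suffice — my toy example, not Tucker’s: log where someone is by weekday and hour, and the single most frequent location for each slot already makes a workable prediction.

    from collections import Counter, defaultdict

    # Hypothetical location log: (weekday, hour, place) observations
    log = [
        ("Mon", 9, "office"), ("Mon", 9, "office"), ("Mon", 9, "cafe"),
        ("Mon", 19, "gym"), ("Sat", 11, "market"), ("Sat", 11, "market"),
    ]

    # Tally how often each place appears in each (weekday, hour) slot
    counts = defaultdict(Counter)
    for day, hour, place in log:
        counts[(day, hour)][place] += 1

    def predict(day, hour):
        """Return the most frequent past location for this slot, if any."""
        slot = counts.get((day, hour))
        return slot.most_common(1)[0][0] if slot else None

    print(predict("Mon", 9))   # -> office

Anything that breaks the routine, like the flat tire, is simply an event the logged history hasn’t captured yet — the part Tucker says we “don’t yet know how to model.”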

Seeking to integrate data streams from multiple sources — our inboxes, our phones, our cars and, with its recent acquisition of a company that makes thermostats and smoke detectors, our bedrooms — a company like Google is well positioned not just to predict our future but also to detect just how much risk we take on every day, be it fire, a flat tire or a default on a loan. (Banks and insurance companies beware: You will be disrupted next!)

With so much predictive power, we may soon know the exact price of “preferring not to,” as a modern-day Bartleby might put it. [Continue reading…]


Astra Taylor: Misogyny and the cult of internet openness

In December, and again in February, at the Google Bus blockades in San Francisco, one thing struck me forcefully: the technology corporation employees waiting for their buses were all staring so intently at their phones that they apparently didn’t notice the unusual things going on around them until their buses were surrounded. Sometimes I feel like I’m living in a science-fiction novel, because my region is so afflicted with people who stare at the tiny screens in their hands on trains, in restaurants, while crossing the street, and too often while driving. San Francisco is, after all, where director Phil Kaufman set the 1978 remake of Invasion of the Body Snatchers, the movie wherein a ferny spore-spouting form of alien life colonizes human beings so that they become zombie-like figures.

In the movies, such colonization took place secretly, or by force, or both: it was a war, and (once upon a time) an allegory for the Cold War and a possible communist takeover. Today, however — Hypercapitalism Invades! — we not only choose to carry around those mobile devices, but pay corporations hefty monthly fees to do so. In return, we get to voluntarily join the great hive of people being in touch all the time, so much so that human nature seems in the process of being remade, with the young immersed in a kind of contact that makes solitude seem like polar ice, something that’s melting away.

We got phones, and then Smart Phones, and then Angry Birds and a million apps — and a golden opportunity to be tracked all the time thanks to the GPS devices in those phones. Your cell phone is the shackle that chains you to corporate America (and potentially to government surveillance as well) and like the ankle bracelets that prisoners wear, it’s your monitor. It connects you to the Internet and so to the brave new world that has such men as Larry Ellison and Mark Zuckerberg in it. That world — maybe not so brave after all — is the subject of Astra Taylor’s necessary, alarming, and exciting new book, The People’s Platform: Taking Back Power and Culture in the Digital Age.

The Internet arose with little regulation, little public decision-making, and a whole lot of fantasy about how it was going to make everyone powerful and how everything would be free. Free, as in unregulated and open, got confused with free, as in not getting paid, and somehow everyone from Facebook to Arianna Huffington created massively lucrative sites (based on advertising dollars) in which the people who made the content went unpaid. Just as Russia woke up with oil oligarchs spreading like mushrooms after a night’s heavy rain, so we woke up with new titans of the tech industry throwing their billionaire weight around. The Internet turns out to be a superb mechanism for consolidating money and power and control, even as it gives toys and minor voices to the rest of us.

As Taylor writes in her book, “The online sphere inspires incessant talk of gift economies and public-spiritedness and democracy, but commercialization and privatization and inequality lurk beneath the surface. This contradiction is captured in a single word: ‘open,’ a concept capacious enough to contain both the communal and capitalistic impulses central to Web 2.0.” And she goes on to discuss, “the tendency of open systems to amplify inequality — and new media thinkers’ glib disregard for this fundamental characteristic.”  Part of what makes her book exceptional, in fact, is its breadth. It reviews much of the existing critique of the Internet and connects the critiques of specific aspects of it into an overview of how a phenomenon supposed to be wildly democratic has become wildly not that way at all.

And at a certain juncture, she turns to gender. Though far from the only weak point of the Internet as an egalitarian space — after all, there’s privacy (lack of), the environment (massive server farms), and economics (tax cheats, “content providers” like musicians fleeced) — gender politics, as she shows in today’s post adapted from her book, is one of the most spectacular problems online. Let’s imagine this as science fiction: a group of humans apparently dissatisfied with how things were going on Earth — where women were increasing their rights, representation, and participation — left our orbit and started their own society on their own planet. The new planet wasn’t far away or hard to get to (if you could afford the technology): it was called the Internet. We all know it by name; we all visit it; but we don’t name the society that dominates it much.

Taylor does: the dominant society, celebrating itself and pretty much silencing everyone else, makes the Internet bear a striking resemblance to Congress in 1850 or a gentlemen’s club (minus any gentleness). It’s a gated community, and as Taylor describes today, the security detail is ferocious, patrolling its borders by trolling and threatening dissident voices, and just having a female name or being identified as female is enough to become a target of hate and threats.

Early this year, a few essays were published on Internet misogyny that were so compelling I thought 2014 might be the year we revisit these online persecutions, the way that we revisited rape in 2013, thanks to the Steubenville and New Delhi assault cases of late 2012. But the subject hasn’t (yet) quite caught fire, and so not much gets said and less gets done about this dynamic new machinery for privileging male and silencing female voices. Which is why we need to keep examining and discussing this, as well as the other problems of the Internet. And why you need to read Astra Taylor’s book. This excerpt is part of her diagnosis of the problems; the book ends with ideas about a cure. Rebecca Solnit

Open systems and glass ceilings
The disappearing woman and life on the internet
By Astra Taylor

The Web is regularly hailed for its “openness” and that’s where the confusion begins, since “open” in no way means “equal.” While the Internet may create space for many voices, it also reflects and often amplifies real-world inequities in striking ways.

An elaborate system organized around hubs and links, the Web has a surprising degree of inequality built into its very architecture. Its traffic, for instance, tends to be distributed according to “power laws,” which follow what’s known as the 80/20 rule — 80% of a desirable resource goes to 20% of the population.
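
A quick way to see that pattern — a toy simulation of my own, not Taylor’s data: draw site traffic from a heavy-tailed Pareto distribution and check what share of total visits the top fifth of sites captures.

    import random

    # Hypothetical traffic for 10,000 sites drawn from a Pareto distribution;
    # a shape parameter near 1.16 is the classic "80/20" case
    traffic = sorted((random.paretovariate(1.16) for _ in range(10_000)), reverse=True)

    top_fifth = sum(traffic[: len(traffic) // 5])
    print(f"Top 20% of sites get about {top_fifth / sum(traffic):.0%} of the traffic")

In runs of this toy model the top fifth typically ends up with somewhere around four-fifths of the visits — the concentration comes from the shape of the distribution itself, before any human bias enters.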

In fact, as anyone knows who has followed the histories of Google, Apple, Amazon, and Facebook, now among the biggest companies in the world, the Web is increasingly a winner-take-all, rich-get-richer sort of place, which means the disparate percentages in those power laws are only likely to look uglier over time.

Continue reading


Catherine Crump and Matthew Harwood: The net closes around us

Twice in my life — in the 1960s and the post-9/11 years — I was suddenly aware of clicks and other strange noises on my phone.  In both periods, I’ve wondered what the story was, and then made self-conscious jokes with whoever was on the other end of the line about those who might (or might not) be listening in.  Twice in my life I’ve felt, up close and personal, that ominous, uncomfortable, twitchy sense of being overheard, without ever knowing if it was a manifestation of the paranoia of the times or of realism — or perhaps of both.

I’m conceptually outraged by mass surveillance, but generally my personal attitude has always been: Go ahead.  Read my email, listen to my phone calls, follow my web searches, check out my location via my cell phone.  My tweets don’t exist — but if they did, I’d say have at ‘em.  I don’t give a damn.

And in some sense, I don’t, even though everyone, including me, is embarrassed by something.  Everyone says something about someone they would rather not have made public (or perhaps have even said).  Everyone has some thing — or sometimes many things — they would rather keep to themselves.

Increasingly, however, as the U.S. surveillance state grows ever more pervasive, domestically and globally, as the corporate version of the same expands exponentially, as prying “eyes” and “ears” of every technological variety proliferate, the question of who exactly we are arises.  What are we without privacy, without a certain kind of unknowability?  What are we when “our” information is potentially anyone’s information?  We may soon find out.  A recent experiment by two Stanford University graduate students who gathered just a few months’ worth of phone metadata on 546 volunteers has, for instance, made mincemeat of President Obama’s claim that the NSA’s massive version of metadata collection “is not looking at people’s names and they’re not looking at content.”  Using only the phone metadata they got, the Stanford researchers “inferred sensitive information about people’s lives, including: neurological and heart conditions, gun ownership, marijuana cultivation, abortion, and participation in Alcoholics Anonymous.”

And that’s just a crude version of what the future holds for all of us.  There are various kinds of extinctions.  That superb environmental reporter Elizabeth Kolbert has just written a powerful book, The Sixth Extinction, about the more usual (if horrifying) kind.  Our developing surveillance world may offer us an example of another kind of extinction: of what we once knew as the private self.  If you want to be chilled to the bone when it comes to this, check out today’s stunning report by the ACLU’s Catherine Crump and Matthew Harwood on where the corporate world is taking your identity. Tom Engelhardt

Invasion of the data snatchers
Big Data and the Internet of Things means the surveillance of everything
By Catherine Crump and Matthew Harwood

Estimates vary, but by 2020 there could be over 30 billion devices connected to the Internet. Once dumb, they will have smartened up thanks to sensors and other technologies embedded in them and, thanks to your machines, your life will quite literally have gone online. 

The implications are revolutionary. Your smart refrigerator will keep an inventory of food items, noting when they go bad. Your smart thermostat will learn your habits and adjust the temperature to your liking. Smart lights will illuminate dangerous parking garages, even as they keep an “eye” out for suspicious activity.

Techno-evangelists have a nice catchphrase for this future utopia of machines and the never-ending stream of information, known as Big Data, it produces: the Internet of Things.  So abstract. So inoffensive. Ultimately, so meaningless.

A future Internet of Things does have the potential to offer real benefits, but the dark side of that seemingly shiny coin is this: companies will increasingly know all there is to know about you.  Most people are already aware that virtually everything a typical person does on the Internet is tracked. In the not-too-distant future, however, real space will be increasingly like cyberspace, thanks to our headlong rush toward that Internet of Things. With the rise of the networked device, what people do in their homes, in their cars, in stores, and within their communities will be monitored and analyzed in ever more intrusive ways by corporations and, by extension, the government.

And one more thing: in cyberspace it is at least theoretically possible to log off.  In your own well-wired home, there will be no “opt out.”

Continue reading
