Frank Pasquale writes: In a recent podcast series called Instaserfs, a former Uber driver named Mansour gave a chilling description of the new, computer-mediated workplace. First, the company tried to persuade him to take a predatory loan to buy a new car. Apparently a number cruncher deemed him at high risk of defaulting. Second, Uber would never respond in person to him – it just sent text messages and emails. This style of supervision was a series of take-it-or-leave-it ultimatums – a digital boss coded in advance.
Then the company suddenly took a larger cut of revenues from him and other drivers. And finally, what seemed most outrageous to Mansour: his job could be terminated without notice if a few passengers gave him one-star reviews, since that could drag his average below 4.7. According to him, Uber offers no real appeals process or other due process for a rating system that can instantly put a driver out of work – it simply crunches the numbers.
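(A rough illustration of that arithmetic: the starting average and rating window below are invented, not Uber's actual formula, but they show how few one-star reviews it takes.)

```python
# Invented numbers, not Uber's formula: how many one-star reviews does it
# take to drag a strong running average below a 4.7 cutoff?
def rating_after(avg, n_rated, new_ratings):
    """Recompute a running average after a batch of new ratings."""
    total = avg * n_rated + sum(new_ratings)
    return total / (n_rated + len(new_ratings))

avg, n = 4.85, 100   # hypothetical driver: 4.85 over the last 100 rated trips
one_stars = 0
while avg >= 4.7:
    avg = rating_after(avg, n, [1])
    n += 1
    one_stars += 1
print(one_stars, "one-star reviews push the average down to", round(avg, 2))
```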
Mansour’s story compresses long-standing trends in credit and employment – and it’s by no means unique. Online retailers live in fear of a ‘Google Death Penalty’ – a sudden, mysterious drop in search-engine rankings if they do something judged fraudulent by Google’s spam detection algorithms. Job applicants at Walmart in the US and other large companies take mysterious ‘personality tests’, which process their responses in undisclosed ways. And white-collar workers face CV-sorting software that may understate, or entirely ignore, their qualifications. One algorithmic CV analyser found all 29,000 people who applied for a ‘reasonably standard engineering position’ unqualified.
The infancy of the internet is over. As online spaces mature, Facebook, Google, Apple, Amazon, and other powerful corporations are setting the rules that govern competition among journalists, writers, coders, and e-commerce firms. Uber and Postmates and other platforms are adding a code layer to occupations like driving and service work. Cyberspace is no longer an escape from the ‘real world’. It is now a force governing it via algorithms: recipe-like sets of instructions to solve problems. From Google search to OkCupid matchmaking, software orders and weights hundreds of variables into clean, simple interfaces, taking us from query to solution. Complex mathematics governs such answers, but it is hidden from plain view, thanks either to secrecy imposed by law, or to complexity outsiders cannot unravel. [Continue reading…]
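(The basic shape of such a system is easy to sketch, even if the real ones weigh hundreds of signals behind closed doors. A toy version, with invented signals and weights:)

```python
# Toy version of the kind of system described above: many signals, opaque
# weights, a clean ordered list out the other end. Signals and weights are invented.
weights = {"relevance": 0.5, "freshness": 0.2, "popularity": 0.25, "spam_penalty": -0.8}

def score(item):
    return sum(weights[k] * item.get(k, 0.0) for k in weights)

items = [
    {"name": "A", "relevance": 0.9, "freshness": 0.1, "popularity": 0.7},
    {"name": "B", "relevance": 0.6, "freshness": 0.9, "popularity": 0.4, "spam_penalty": 1.0},
    {"name": "C", "relevance": 0.7, "freshness": 0.5, "popularity": 0.9},
]
for item in sorted(items, key=score, reverse=True):
    print(item["name"], round(score(item), 2))
```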
Google’s search algorithm could steal the presidency
Wired: Imagine an election — a close one. You’re undecided. So you type the name of one of the candidates into your search engine of choice. (Actually, let’s not be coy here. In most of the world, one search engine dominates; in Europe and North America, it’s Google.) And Google coughs up, in fractions of a second, articles and facts about that candidate. Great! Now you are an informed voter, right? But a study published this week says that the order of those results, the ranking of positive or negative stories on the screen, can have an enormous influence on the way you vote. And if the election is close enough, the effect could be profound enough to change the outcome.
In other words: Google’s ranking algorithm for search results could accidentally steal the presidency. “We estimate, based on win margins in national elections around the world,” says Robert Epstein, a psychologist at the American Institute for Behavioral Research and Technology and one of the study’s authors, “that Google could determine the outcome of upwards of 25 percent of all national elections.” [Continue reading…]
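(A back-of-the-envelope toy model, with every number invented, shows why close races are the vulnerable ones: nudging a modest share of undecided voters is enough to flip a one-point margin.)

```python
import random

# Toy model only -- every number here is invented. The point is that in a
# race decided by roughly one point, shifting how the undecided voters
# break is enough to change the winner.
random.seed(0)

def election(n_voters=1_000_000, undecided_share=0.10, bias_shift=0.0):
    decided = int(n_voters * (1 - undecided_share))
    a = decided // 2 + int(0.005 * n_voters)   # candidate A starts ~1 point ahead among decided voters
    b = decided - a
    for _ in range(n_voters - decided):        # undecided voters break roughly evenly...
        if random.random() < 0.5 + bias_shift: # ...unless ranking bias nudges them toward B
            b += 1
        else:
            a += 1
    return ("A wins" if a > b else "B wins", a - b)

print("no ranking bias:", election())
print("undecided break 60/40 for B:", election(bias_shift=0.10))
```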
Musk, Hawking, and Chomsky warn of impending robot wars fueled by artificial intelligence
The Verge reports: Leading artificial intelligence researchers have warned that an “AI arms race” could be disastrous for humanity, and are urging the UN to consider a ban on “offensive autonomous weapons.” An open letter published by the Future of Life Institute (FLI) and signed by high-profile figures including Stephen Hawking, Elon Musk, and Noam Chomsky, warns that weapons that automatically “select and engage targets without human intervention” could become the “Kalashnikovs of tomorrow,” fueling war, terrorism, and global instability.
“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades,” states the letter, citing armed quadcopters (a technology that has already been deployed in a very crude fashion) as an example. The letter notes that although it’s possible that the use of autonomous weapons could reduce human casualties on the battlefield, this itself could be a mistake as it would “[lower] the threshold” for going to war. [Continue reading…]
The risks that GMOs may pose to the global ecosystem
Mark Spitznagel and Nassim Nicholas Taleb, who both anticipated the failure of the financial system in 2007, see eerie parallels in the reasoning being used by those who believed in stability then and those who insist now that there are no significant risks involved in the promotion of genetically modified organisms (GMOs).
Spitznagel and Taleb write: First, there has been a tendency to label anyone who dislikes G.M.O.s as anti-science — and put them in the anti-antibiotics, antivaccine, even Luddite category. There is, of course, nothing scientific about the comparison. Nor is the scholastic invocation of a “consensus” a valid scientific argument.
Interestingly, there are similarities between arguments that are pro-G.M.O. and snake oil, the latter having relied on a cosmetic definition of science. The charge of “therapeutic nihilism” was leveled at people who contested snake oil medicine at the turn of the 20th century. (At that time, anything with the appearance of sophistication was considered “progress.”)
Second, we are told that a modified tomato is not different from a naturally occurring tomato. That is wrong: The statistical mechanism by which a tomato was built by nature is bottom-up, by tinkering in small steps (as with the restaurant business, distinct from contagion-prone banks). In nature, errors stay confined and, critically, isolated.
Third, the technological salvation argument we faced in finance is also present with G.M.O.s, which are intended to “save children by providing them with vitamin-enriched rice.” The argument’s flaw is obvious: In a complex system, we do not know the causal chain, and it is better to solve a problem by the simplest method, and one that is unlikely to cause a bigger problem.
Fourth, by leading to monoculture — which is the same in finance, where all risks became systemic — G.M.O.s threaten more than they can potentially help. Ireland’s population was decimated by the effect of monoculture during the potato famine. Just consider that the same can happen at a planetary scale.
Fifth, and what is most worrisome, is that the risks of G.M.O.s are more severe than those of finance. They can lead to complex chains of unpredictable changes in the ecosystem, while the methods of risk management with G.M.O.s — unlike finance, where some effort was made — are not even primitive.
The G.M.O. experiment, carried out in real time and with our entire food and ecological system as its laboratory, is perhaps the greatest case of human hubris ever. It creates yet another systemic, “too big to fail” enterprise — but one for which no bailouts will be possible when it fails. [Continue reading…]
IBM announces major breakthrough: the world’s first 7-nanometer chips
The New York Times reports: IBM said on Thursday that it had made working versions of ultradense computer chips, with roughly four times the capacity of today’s most powerful chips.
The announcement, made on behalf of an international consortium led by IBM, the giant computer company, is part of an effort to manufacture the most advanced computer chips in New York’s Hudson Valley, where IBM is investing $3 billion in a private-public partnership with New York State, GlobalFoundries, Samsung and equipment vendors.
The development lifts a bit of the cloud that has fallen over the semiconductor industry, which has struggled to maintain its legendary pace of doubling transistor density every two years.
Intel, which for decades has been the industry leader, has faced technical challenges in recent years. Moreover, technologists have begun to question whether the longstanding pace of chip improvement, known as Moore’s Law, would continue past the current 14-nanometer generation of chips.
Each generation of chip technology is defined by the minimum size of fundamental components that switch current at nanosecond intervals. Today the industry is making the commercial transition from what is generally described as 14-nanometer manufacturing to 10-nanometer manufacturing.
Each generation brings roughly a 50 percent reduction in the area required by a given amount of circuitry. IBM’s new chips, though still in a research phase, suggest that semiconductor technology will continue to shrink at least through 2018.
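(The arithmetic behind that 50 percent figure is simple to sketch: treat each node name as a linear dimension and area scales with its square, so two generations of shrink roughly quadruple density, which matches the fourfold capacity gain mentioned above.)

```python
# Illustrative scaling arithmetic: treating each node name as a linear
# dimension (a simplification -- node names are partly marketing), area
# scales with its square, so two generations of shrink roughly quadruple
# transistor density.
nodes_nm = [14, 10, 7]
density_vs_14nm = 1.0
for prev, cur in zip(nodes_nm, nodes_nm[1:]):
    area_ratio = (cur / prev) ** 2
    density_vs_14nm /= area_ratio
    print(f"{prev}nm -> {cur}nm: area x{area_ratio:.2f}, density x{density_vs_14nm:.1f} vs 14nm")
```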
The company said on Thursday that it had working samples of chips with seven-nanometer transistors. It made the research advance by using silicon-germanium instead of pure silicon in key regions of the molecular-size switches.
The new material makes possible faster transistor switching and lower power requirements. The tiny size of these transistors suggests that further advances will require new materials and new manufacturing techniques.
As points of comparison to the size of the seven-nanometer transistors, a strand of DNA is about 2.5 nanometers in diameter and a red blood cell is roughly 7,500 nanometers in diameter. IBM said the new technology would make it possible to build microprocessors with more than 20 billion transistors. [Continue reading…]
On not being there: The data-driven body at work and at play
Rebecca Lemov writes: The protagonist of William Gibson’s 2014 science-fiction novel The Peripheral, Flynne Fisher, works remotely in a way that lends a new and fuller sense to that phrase. The novel features a double future: One set of characters inhabits the near future, ten to fifteen years from the present, while another lives seventy years on, after a breakdown of the climate and multiple other systems that has apocalyptically altered human and technological conditions around the world.
In that “further future,” only 20 percent of the Earth’s human population has survived. Each of these fortunate few is well off and able to live a life transformed by healing nanobots, somaticized e-mail (which delivers messages and calls to the roof of the user’s mouth), quantum computing, and clean energy. For their amusement and profit, certain “hobbyists” in this future have the Borgesian option of cultivating an alternative path in history — it’s called “opening up a stub” — and mining it for information as well as labor.
Flynne, the remote worker, lives on one of those paths. A young woman from the American Southeast, possibly Appalachia or the Ozarks, she favors cutoff jeans and resides in a trailer, eking out a living as a for-hire sub playing video games for wealthy aficionados. Recruited by a mysterious entity that is beta-testing drones that are doing “security” in a murky skyscraper in an unnamed city, she thinks at first that she has been taken on to play a kind of video game in simulated reality. As it turns out, she has been employed to work in the future as an “information flow” — low-wage work, though the pay translates to a very high level of remuneration in the place and time in which she lives.
What is of particular interest is the fate of Flynne’s body. Before she goes to work she must tend to its basic needs (nutrition and elimination), because during her shift it will effectively be “vacant.” Lying on a bed with a special data-transmitting helmet attached to her head, she will be elsewhere, inhabiting an ambulatory robot carapace — a “peripheral” — built out of bio-flesh that can receive her consciousness.
Bodies in this data-driven economic backwater of a future world economy are abandoned for long stretches of time — disposable, cheapened, eerily vacant in the temporary absence of “someone at the helm.” Meanwhile, fleets of built bodies, grown from human DNA, await habitation.
Alex Rivera explores similar territory in his Mexican sci-fi film The Sleep Dealer (2008), set in a future world after a wall erected on the US–Mexican border has successfully blocked migrants from entering the United States. Digital networks allow people to connect to strangers all over the world, fostering fantasies of physical and emotional connection. At the same time, low-income would-be migrant workers in Tijuana and elsewhere can opt to do remote work by controlling robots building a skyscraper in a faraway city, locking their bodies into devices that transmit their labor to the site. In tank-like warehouses, lined up in rows of stalls, they “jack in” by connecting data-transmitting cables to nodes implanted in their arms and backs. Their bodies are in Mexico, but their work is in New York or San Francisco, and while they are plugged in and wearing their remote-viewing spectacles, their limbs move like the appendages of ghostly underwater creatures. Their life force drained by the taxing labor, these “sleep dealers” end up as human discards.
What is surprising about these sci-fi conceits, from “transitioning” in The Peripheral to “jacking in” in The Sleep Dealer, is how familiar they seem, or at least how closely they reflect certain aspects of contemporary reality. Almost daily, we encounter people who are there but not there, flickering in and out of what we think of as presence. A growing body of research explores the question of how users interact with their gadgets and media outlets, and how in turn these interactions transform social relationships. The defining feature of this heavily mediated reality is our presence “elsewhere,” a removal of at least part of our conscious awareness from wherever our bodies happen to be. [Continue reading…]
Why the modern world is bad for your brain
Daniel J Levitin writes: Our brains are busier than ever before. We’re assaulted with facts, pseudo facts, jibber-jabber, and rumour, all posing as information. Trying to figure out what you need to know and what you can ignore is exhausting. At the same time, we are all doing more. Thirty years ago, travel agents made our airline and rail reservations, salespeople helped us find what we were looking for in shops, and professional typists or secretaries helped busy people with their correspondence. Now we do most of those things ourselves. We are doing the jobs of 10 different people while still trying to keep up with our lives, our children and parents, our friends, our careers, our hobbies, and our favourite TV shows.
Our smartphones have become Swiss army knife–like appliances that include a dictionary, calculator, web browser, email, Game Boy, appointment calendar, voice recorder, guitar tuner, weather forecaster, GPS, texter, tweeter, Facebook updater, and flashlight. They’re more powerful and do more things than the most advanced computer at IBM corporate headquarters 30 years ago. And we use them all the time, part of a 21st-century mania for cramming everything we do into every single spare moment of downtime. We text while we’re walking across the street, catch up on email while standing in a queue – and while having lunch with friends, we surreptitiously check to see what our other friends are doing. At the kitchen counter, cosy and secure in our domicile, we write our shopping lists on smartphones while we are listening to that wonderfully informative podcast on urban beekeeping.
But there’s a fly in the ointment. Although we think we’re doing several things at once, multitasking, this is a powerful and diabolical illusion. Earl Miller, a neuroscientist at MIT and one of the world experts on divided attention, says that our brains are “not wired to multitask well… When people think they’re multitasking, they’re actually just switching from one task to another very rapidly. And every time they do, there’s a cognitive cost in doing so.” So we’re not actually keeping a lot of balls in the air like an expert juggler; we’re more like a bad amateur plate spinner, frantically switching from one task to another, ignoring the one that is not right in front of us but worried it will come crashing down any minute. Even though we think we’re getting a lot done, ironically, multitasking makes us demonstrably less efficient. [Continue reading…]
Even the tech savvy should turn off their phones and listen
Bruno Giussani writes: Nothing exists nowadays unless it is Facebooked, Tweeted or Instagrammed with emphasis on “insta”. So perhaps the event that I hosted on Tuesday at the Royal Institution, TEDGlobal London, didn’t exist. Because we ran a little experiment, banning the use of smartphones, tablets, laptops, cameras – any electronic device – during the conference.
At the end of the event (which over two sessions of 100 minutes each featured scientists, technologists, historians, a photographer, a slam poet, a singer, a racing car driver and a writer) I asked the 350 attendees whether we should apply the same rule next time. It’s a safe guess that at least two-thirds of them use Twitter or FB with some regularity, but pretty much every hand in the theatre shot up, with maybe two exceptions. I have heard nothing but positive feedback since. [Continue reading…]
Artificial neural networks on acid
Quartz reports: American sci-fi novelist Philip K. Dick once famously asked, Do Androids Dream of Electric Sheep? While he was on the right track, the answer appears to be, no, they don’t. They dream of dog-headed knights atop horses, of camel-birds and pig-snails, and of Dali-esque mutated landscapes.
Google’s image recognition software, which can detect, analyze, and even auto-caption images, uses artificial neural networks to simulate the human brain. In a process they’re calling “inceptionism,” Google engineers set out to see what these artificial networks “dream” of — what, if anything, do they see in a nondescript image of clouds, for instance? What does a fake brain that’s trained to detect images of dogs see when it’s shown a picture of a knight?
Google trains the software by feeding it millions of images, eventually teaching it to recognize specific objects within a picture. When it’s fed an image, it is asked to emphasize the object in the image that it recognizes. The network is made up of layers — the higher the layer, the more precise the interpretation. Eventually, in the final output layer, the network makes a “decision” as to what’s in the image.
But the networks aren’t restricted to only identifying images. Their training allows them to generate images as well. [Continue reading…]
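(Not Google's code, but the core trick, often described as DeepDream, can be sketched in a few lines of PyTorch. The network, layer choice, step size and iteration count below are arbitrary assumptions; the point is simply gradient ascent on a layer's activations.)

```python
# Minimal sketch of the "dreaming" idea: nudge an input image, by gradient
# ascent, to amplify whatever a chosen layer of a pretrained classifier
# already responds to. Requires torch and torchvision; all settings here
# (network, layer, learning rate, steps) are arbitrary choices.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

activations = {}
def hook(module, inputs, output):
    activations["feat"] = output

# Hook a mid-level layer; higher layers tend to amplify more object-like patterns.
model.inception4c.register_forward_hook(hook)

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a nondescript photo of clouds
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    model(img)                            # forward pass fills activations["feat"]
    loss = -activations["feat"].norm()    # maximizing the activation = minimizing its negative
    loss.backward()
    optimizer.step()

# `img` now exaggerates whatever patterns the layer "saw" in the noise.
```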
How technology is damaging our brains
The New York Times reports: When one of the most important e-mail messages of his life landed in his in-box a few years ago, Kord Campbell overlooked it.
Not just for a day or two, but 12 days. He finally saw it while sifting through old messages: a big company wanted to buy his Internet start-up.
“I stood up from my desk and said, ‘Oh my God, oh my God, oh my God,’ ” Mr. Campbell said. “It’s kind of hard to miss an e-mail like that, but I did.”
The message had slipped by him amid an electronic flood: two computer screens alive with e-mail, instant messages, online chats, a Web browser and the computer code he was writing.
While he managed to salvage the $1.3 million deal after apologizing to his suitor, Mr. Campbell continues to struggle with the effects of the deluge of data. Even after he unplugs, he craves the stimulation he gets from his electronic gadgets. He forgets things like dinner plans, and he has trouble focusing on his family.
His wife, Brenda, complains, “It seems like he can no longer be fully in the moment.”
This is your brain on computers.
Scientists say juggling e-mail, phone calls and other incoming information can change how people think and behave. They say our ability to focus is being undermined by bursts of information.
These play to a primitive impulse to respond to immediate opportunities and threats. The stimulation provokes excitement — a dopamine squirt — that researchers say can be addictive. In its absence, people feel bored.
The resulting distractions can have deadly consequences, as when cellphone-wielding drivers and train engineers cause wrecks. And for millions of people like Mr. Campbell, these urges can inflict nicks and cuts on creativity and deep thought, interrupting work and family life.
While many people say multitasking makes them more productive, research shows otherwise. Heavy multitaskers actually have more trouble focusing and shutting out irrelevant information, scientists say, and they experience more stress.
And scientists are discovering that even after the multitasking ends, fractured thinking and lack of focus persist. In other words, this is also your brain off computers. [Continue reading…]
The secret history of the vocoder
Why the singularity is greatly exaggerated
In 1968, Marvin Minsky said, “Within a generation we will have intelligent computers like HAL in the film, 2001.” What made him and other early AI proponents think machines would think like humans?
Even before Moore’s law there was the idea that computers are going to get faster and their clumsy behavior is going to get a thousand times better. It’s what Ray Kurzweil now claims. He says, “OK, we’re moving up this curve in terms of the number of neurons, number of processing units, so by this projection we’re going to be at super-human levels of intelligence.” But that’s deceptive. It’s a fallacy. Just adding more speed or neurons or processing units doesn’t mean you end up with a smarter or more capable system. What you need are new algorithms, new ways of understanding a problem. In the area of creativity, it’s not at all clear that a faster computer is going to get you there. You’re just going to come up with more bad, bland, boring things. That ability to distinguish, to filter out what’s interesting, that’s still elusive.
Today’s computers, though, can generate an awful lot of connections in split seconds.
But generating is fairly easy and testing pretty hard. In Robert Altman’s movie, The Player, they try to combine two movies to make a better one. You can imagine a computer that just takes all movie titles and tries every combination of pairs, like Reservoir Dogs meets Casablanca. I could write that program right now on my laptop and just let it run. It would instantly generate all possible combinations of movies and there will be some good ones. But recognizing them, that’s the hard part.
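(He is not exaggerating about the generating half. A minimal sketch, with placeholder titles:)

```python
# The "generate" half really is a few lines; it's recognizing the good
# pitches that is hard. Titles are placeholders.
from itertools import combinations

titles = ["Reservoir Dogs", "Casablanca", "Alien", "Groundhog Day", "The Player"]

pitches = [f"{a} meets {b}" for a, b in combinations(titles, 2)]
print(len(pitches), "pitches, e.g.:", pitches[0])

# With 10,000 titles this yields roughly 50 million unordered pairs --
# trivial to enumerate, and almost all of them worthless.
```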
That’s the part you need humans for.
Right, the Tim Robbins movie exec character says, “I listen to stories and decide if they’ll make good movies or not.” The great majority of combinations won’t work, but every once in a while there’s one that is both new and interesting. In early AI it seemed like the testing was going to be easy. But we haven’t been able to figure out the filtering.
Can’t you write a creativity algorithm?
If you want to do variations on a theme, like Thomas Kinkade, sure. Take our movie machine. Let’s say there have been 10,000 movies — that’s 10,000 squared, or 100 million combinations of pairs of movies. We can build a classifier that would look at lots of pairs of successful movies and do some kind of inference on it so that it could learn what would be successful again. But it would be looking for patterns that are already existent. It wouldn’t be able to find that new thing that was totally out of left field. That’s what I think of as creativity — somebody comes up with something really new and clever. [Continue reading…]
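(A hand-wavy sketch of that classifier idea, with fabricated features and labels, makes the limitation concrete: the model will happily score new pairs, but only against patterns it has already seen.)

```python
# Fabricated example: featurize pairs of past movies and learn which
# combinations "worked." The features, labels and hidden pattern are all
# invented; the point is that the model only recombines existing patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((500, 6))                     # pretend features of 500 past movie pairs
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)    # pretend "hit" label with a pattern baked in

clf = LogisticRegression().fit(X, y)
new_pair = rng.random((1, 6))
print("predicted hit probability:", round(clf.predict_proba(new_pair)[0, 1], 2))
```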
We are ignoring the new machine age at our peril
John Naughton writes: As a species, we don’t seem to be very good at dealing with nonlinearity. We cope moderately well with situations and environments that are changing gradually. But sudden, major discontinuities – what some people call “tipping points” – leave us spooked. That’s why we are so perversely relaxed about climate change, for example: things are changing slowly, imperceptibly almost, but so far there hasn’t been the kind of sharp, catastrophic change that would lead us seriously to recalibrate our behaviour and attitudes.
So it is with information technology. We know – indeed, it has become a cliche – that computing power has been doubling at least every two years since records of these things began. We know that the amount of data now generated by our digital existence is expanding annually at an astonishing rate. We know that our capacity to store digital information has been increasing exponentially. And so on. What we apparently have not sussed, however, is that these various strands of technological progress are not unconnected. Quite the contrary, and therein lies our problem.
The thinker who has done most to explain the consequences of connectedness is a Belfast man named W Brian Arthur, an economist who was the youngest person ever to occupy an endowed chair at Stanford University and who in later years has been associated with the Santa Fe Institute, one of the world’s leading interdisciplinary research institutes. In 2009, he published a remarkable book, The Nature of Technology, in which he formulated a coherent theory of what technology is, how it evolves and how it spurs innovation and industry. Technology, he argued, “builds itself organically from itself” in ways that resemble chemistry or even organic life. And implicit in Arthur’s conception of technology is the idea that innovation is not linear, but what mathematicians call “combinatorial”, ie one driven by a whole bunch of things. And the significant point about combinatorial innovation is that it brings about radical discontinuities that nobody could have anticipated. [Continue reading…]
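(The arithmetic behind “combinatorial” is worth pausing on: the number of possible combinations of existing technologies grows far faster than the number of technologies itself, which is why the resulting discontinuities blindside us. A quick sketch:)

```python
# Combinatorial growth: pairings grow quadratically, but combinations of
# two or more components grow exponentially with the number of components.
from math import comb

for n in [10, 20, 40, 80]:
    pairs = comb(n, 2)
    larger = 2 ** n - n - 1   # every subset of two or more components
    print(f"{n:>3} components: {pairs:>5} pairwise combinations, {larger:.2e} of two or more")
```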
Why your employer would like to replace you with a machine
Zeynep Tufekci writes: The machine hums along, quietly scanning the slides, generating Pap smear diagnostics, just the way a college-educated, well-compensated lab technician might.
A robot with emotion-detection software interviews visitors to the United States at the border. In field tests, this eerily named “embodied avatar kiosk” does much better than humans in catching those with invalid documentation. Emotional-processing software has gotten so good that ad companies are looking into “mood-targeted” advertising, and the government of Dubai wants to use it to scan all its closed-circuit TV feeds.
Yes, the machines are getting smarter, and they’re coming for more and more jobs.
Not just low-wage jobs, either.
Today, machines can process regular spoken language and not only recognize human faces, but also read their expressions. They can classify personality types, and have started being able to carry out conversations with appropriate emotional tenor.
Machines are getting better than humans at figuring out who to hire, who’s in a mood to pay a little more for that sweater, and who needs a coupon to nudge them toward a sale. In applications around the world, software is being used to predict whether people are lying, how they feel and whom they’ll vote for.
To crack these cognitive and emotional puzzles, computers needed not only sophisticated, efficient algorithms, but also vast amounts of human-generated data, which can now be easily harvested from our digitized world. The results are dazzling. Most of what we think of as expertise, knowledge and intuition is being deconstructed and recreated as an algorithmic competency, fueled by big data.
But computers do not just replace humans in the workplace. They shift the balance of power even more in favor of employers. Our normal response to technological innovation that threatens jobs is to encourage workers to acquire more skills, or to trust that the nuances of the human mind or human attention will always be superior in crucial ways. But when machines of this capacity enter the equation, employers have even more leverage, and our standard response is not sufficient for the looming crisis. [Continue reading…]
Electric cars that run on coal
Climate Central: The world must move quickly to make electric vehicles more climate-friendly, or it may not be able to meet its climate goals.
That’s the conclusion of a University of Toronto paper published in the March edition of Nature Climate Change, which argues that countries need to reduce the carbon intensity of their electric power supply in order to make electric transportation systems and other infrastructure an effective strategy for combating climate change.
Think about it this way: Every Nissan Leaf might run on electric power, but how that electricity was generated determines what greenhouse gas emissions the car is responsible for. If the car is charged on solar or geothermal power, the carbon footprint may be minuscule. If it’s charged on electricity generated using coal, it might prove as bad or worse for the climate than burning gasoline. (Climate Central created a road map for climate-friendly cars in 2013 showing where driving electric vehicles is most climate friendly in the U.S.)
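(A back-of-the-envelope comparison with round figures makes the point: charged on coal-heavy electricity, an electric car lands in roughly the same carbon-per-mile territory as a 30 mpg gasoline car; charged on low-carbon power, it is far cleaner.)

```python
# Round figures only; upstream fuel production and grid transmission losses
# are ignored, and both would make the coal case look worse.
GASOLINE_G_CO2_PER_GALLON = 8_900   # burning a gallon of gasoline emits ~8.9 kg of CO2
EV_KWH_PER_MILE = 0.30              # roughly a Nissan Leaf

def ev_g_per_mile(grid_g_co2_per_kwh):
    return EV_KWH_PER_MILE * grid_g_co2_per_kwh

def gas_g_per_mile(mpg):
    return GASOLINE_G_CO2_PER_GALLON / mpg

print("30 mpg gasoline car:          ", round(gas_g_per_mile(30)), "g CO2/mile")
print("EV on coal (~1,000 g/kWh):    ", round(ev_g_per_mile(1000)), "g CO2/mile")
print("EV on hydro/solar (~35 g/kWh):", round(ev_g_per_mile(35)), "g CO2/mile")
```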
The University of Toronto paper establishes an emissions threshold to help governments and consumers judge, based on the carbon intensity of the electricity those vehicles use, whether pushing for electric cars and electrifying other modes of transportation actually helps the climate. [Continue reading…]
E.O. Wilson talks about the threat to Earth’s biodiversity
Researchers announce major advance in image-recognition software
The New York Times reports: Two groups of scientists, working independently, have created artificial intelligence software capable of recognizing and describing the content of photographs and videos with far greater accuracy than ever before, sometimes even mimicking human levels of understanding.
Until now, so-called computer vision has largely been limited to recognizing individual objects. The new software, described on Monday by researchers at Google and at Stanford University, teaches itself to identify entire scenes: a group of young men playing Frisbee, for example, or a herd of elephants marching on a grassy plain.
The software then writes a caption in English describing the picture. Compared with human observations, the researchers found, the computer-written descriptions are surprisingly accurate.
The advances may make it possible to better catalog and search for the billions of images and hours of video available online, which are often poorly described and archived. At the moment, search engines like Google rely largely on written language accompanying an image or video to ascertain what it contains. [Continue reading…]
Hydrogen cars about to go on sale. Their only emission: water
The New York Times reports: Remember the hydrogen car?
A decade ago, President George W. Bush espoused the environmental promise of cars running on hydrogen, the universe’s most abundant element. “The first car driven by a child born today,” he said in his 2003 State of the Union speech, “could be powered by hydrogen, and pollution-free.”
That changed under Steven Chu, the Nobel Prize-winning physicist who was President Obama’s first Secretary of Energy. “We asked ourselves, ‘Is it likely in the next 10 or 15, 20 years that we will convert to a hydrogen-car economy?’” Dr. Chu said then. “The answer, we felt, was ‘no.’ ” The administration slashed funding for hydrogen fuel cell research.
Attention shifted to battery electric vehicles, particularly those made by the headline-grabbing Tesla Motors.
The hydrogen car, it appeared, had died. And many did not mourn its passing, particularly those who regarded the auto companies’ interest in hydrogen technology as a stunt to signal that they cared about the environment while selling millions of highly profitable gas guzzlers.
Except the companies, including General Motors, Honda, Toyota, Daimler and Hyundai, persisted.
After many years and billions of dollars of research and development, hydrogen cars are headed to the showrooms. [Continue reading…]