Space.com reports: Pluto, known for more than eight decades as just a faint, fuzzy and faraway point of light, is shaping up to be one of the most complex and diverse worlds in the solar system.
Pluto’s frigid surface varies tremendously from place to place, featuring provinces dominated by different types of ices — methane in one place, nitrogen in another and water in yet another, newly analyzed photos and measurements from NASA’s New Horizons mission reveal.
“That is unprecedented,” said New Horizons principal investigator Alan Stern, who’s based at the Southwest Research Institute in Boulder, Colorado.
“I don’t know any other place in the entirety of the outer solar system where you see anything like this,” Stern told Space.com. “The closest analogy is the Earth, where we see water-rich surfaces and rock-rich surfaces that are completely different.”
That’s just one of the new Pluto results, which are presented in a set of five New Horizons papers published online on Thursday in the journal Science. Taken together, the five studies paint the Pluto system in sharp detail, shedding new light on the dwarf planet’s composition, geology and evolution over the past 4.6 billion years. [Continue reading…]
See also an infographic explaining NASA’s mission to Pluto.
We can get a hint of why the Go victory is important, however, by looking at the difference between the companies behind these game-playing computers. Deep Blue was the product of IBM, which was back then largely a hardware company. But the software – AlphaGo – that beat Go player Lee Sedol was created by DeepMind, a branch of Google based in the UK specialising in machine learning.
AlphaGo’s success wasn’t because of so-called “Moore’s law”, which states that computer processor speed doubles roughly every two years. Computers haven’t yet become powerful enough to calculate all the possible moves in Go – which is much harder to do than in chess. Instead, DeepMind’s work was based on carefully deploying new machine-learning methods and integrating them within more standard game-playing algorithms. Using vast amounts of data, AlphaGo has learnt how to focus its resources where they are most needed, and how to do a better job with those resources.
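A back-of-envelope comparison shows why full-width search is hopeless in Go. The branching factors (roughly 35 legal moves per position in chess, roughly 250 in Go) and game lengths (about 80 and 150 plies) used below are commonly cited approximations, not figures from the article:

```python
import math

def tree_size(branching_factor, plies):
    """Naive count of move sequences a full-width search would have to examine."""
    return branching_factor ** plies

chess = tree_size(35, 80)
go = tree_size(250, 150)

print(f"chess ~10^{int(math.log10(chess))}")  # ~10^123
print(f"go    ~10^{int(math.log10(go))}")     # ~10^359
```

The Go tree is not just bigger but astronomically bigger, which is why AlphaGo had to learn where to focus rather than enumerate.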
Melissa Dahl writes: The fire alarm goes off, and it’s apparently not a mistake or a drill: Just outside the door, smoke fills the hallway. Luckily, you happen to have a guide for such a situation: a little bot with a sign that literally reads EMERGENCY GUIDE ROBOT. But, wait — it’s taking you in the opposite direction of the way you came in, and it seems to want you to go down an unfamiliar hallway. Do you trust your own instinct and escape the way you came? Or do you trust the robot?
Probably, you will blindly follow the robot, according to the findings of a fascinating new study from the Georgia Institute of Technology. In an emergency situation — a fake one, though the test subjects didn’t know that — most people trusted the robot over their own instincts, even when the robot had shown earlier signs of malfunctioning. It’s a new wrinkle for researchers who study trust in human-robot interactions. Previously, this work had been focused on getting people to trust robotic systems, such as Google’s driverless cars. Now this new research hints at another problem: How do you stop people from trusting robots too much? It’s a timely question, especially considering the news this week of the first crash caused by one of Google’s self-driving cars. [Continue reading…]
The light we view through it has spent hundreds, millions, even billions of years crossing the vastness of space to reach us, carrying with it images of things that happened long ago.
On Thursday, astronomers at the Hubble Space Telescope announced that they’d seen back farther than they ever have before, to a galaxy 13.4 billion light years away in a time when the universe was just past its infancy.
The finding shattered what’s known as the “cosmic distance record,” illuminating a point in time that scientists once thought could never be seen with current technology.
“We’ve taken a major step back in time, beyond what we’d ever expected to be able to do with Hubble,” Yale University astrophysicist Pascal Oesch, the lead author of the study, said in a statement. [Continue reading…]
Rolling Stone reports: “Welcome to robot nursery school,” Pieter Abbeel says as he opens the door to the Robot Learning Lab on the seventh floor of a sleek new building on the northern edge of the UC-Berkeley campus. The lab is chaotic: bikes leaning against the wall, a dozen or so grad students in disorganized cubicles, whiteboards covered with indecipherable equations. Abbeel, 38, is a thin, wiry guy, dressed in jeans and a stretched-out T-shirt. He moved to the U.S. from Belgium in 2000 to get a Ph.D. in computer science at Stanford and is now one of the world’s foremost experts in understanding the challenge of teaching robots to think intelligently. But first, he has to teach them to “think” at all. “That’s why we call this nursery school,” he jokes. He introduces me to Brett, a six-foot-tall humanoid robot made by Willow Garage, a high-profile Silicon Valley robotics manufacturer that is now out of business. The lab acquired the robot several years ago to experiment with. Brett, which stands for “Berkeley robot for the elimination of tedious tasks,” is a friendly-looking creature with a big, flat head and widely spaced cameras for eyes, a chunky torso, two arms with grippers for hands and wheels for feet. At the moment, Brett is off-duty and stands in the center of the lab with the mysterious, quiet grace of an unplugged robot. On the floor nearby is a box of toys that Abbeel and the students teach Brett to play with: a wooden hammer, a plastic toy airplane, some giant Lego blocks. Brett is only one of many robots in the lab. In another cubicle, a nameless 18-inch-tall robot hangs from a sling on the back of a chair. Down in the basement is an industrial robot that plays in the equivalent of a robot sandbox for hours every day, just to see what it can teach itself. Across the street in another Berkeley lab, a surgical robot is learning how to stitch up human flesh, while a graduate student teaches drones to pilot themselves intelligently around objects. 
“We don’t want to have drones crashing into things and falling out of the sky,” Abbeel says. “We’re trying to teach them to see.”
Industrial robots have long been programmed with specific tasks: Move arm six inches to the left, grab module, twist to the right, insert module into PC board. Repeat 300 times each hour. These machines are as dumb as lawn mowers. But in recent years, breakthroughs in machine learning – algorithms that roughly mimic the human brain and allow machines to learn things for themselves – have given computers a remarkable ability to recognize speech and identify visual patterns. Abbeel’s goal is to imbue robots with a kind of general intelligence – a way of understanding the world so they can learn to complete tasks on their own. He has a long way to go. “Robots don’t even have the learning capabilities of a two-year-old,” he says. For example, Brett has learned to do simple tasks, such as tying a knot or folding laundry. Things that are simple for humans, such as recognizing that a crumpled ball of fabric on a table is in fact a towel, are surprisingly difficult for a robot, in part because a robot has no common sense, no memory of earlier attempts at towel-folding and, most important, no concept of what a towel is. All it sees is a wad of color. [Continue reading…]
The Guardian reports: A US government agency says it has attained the “holy grail” of energy – the next-generation system of battery storage that has been hotly pursued by the likes of Bill Gates and Elon Musk.
Advanced Research Projects Agency-Energy (Arpa-E) – a branch of the Department of Energy – says it achieved its breakthrough technology in seven years.
Ellen Williams, Arpa-E’s director, said: “I think we have reached some holy grails in batteries – just in the sense of demonstrating that we can create a totally new approach to battery technology, make it work, make it commercially viable, and get it out there to let it do its thing.”
If that’s the case, Arpa-E has come out ahead of Gates and Musk in the multi-billion-dollar race to build the next-generation battery for power companies and home storage.
Arpa-E was founded in 2009 under Barack Obama’s economic recovery plan to fund early stage research into the generation and storage of energy.
Such projects, or so-called moonshots, were widely seen as too risky for regular investors, but – if they succeed – could potentially be game-changing. [Continue reading…]
Jonathan Zdziarski, an expert in iOS forensics, writes: For years, the government could come to Apple with a subpoena and a phone, and have the manufacturer provide a disk image of the device. This largely worked because Apple didn’t have to hack into their phones to do this. Up until iOS 8, the encryption Apple chose to use in their design was easily reversible when you had code execution on the phone (which Apple does). So all through iOS 7, Apple only needed to insert the key into the safe and provide FBI with a copy of the data.
This service worked like a “black box”, and while Apple may have needed to explain their methods in court at some point, they were more likely considered a neutral third-party lab, much as most forensics companies would be if you sent them a DNA sample. The level of validation and accountability here is relatively low, and methods can often be opaque; that is, Apple could simply claim that the tech involved was a trade secret, and have gotten off without much more than an explanation. An engineer at Apple could hack up a quick and dirty tool to dump the disk, and nobody would ever need to see it, because Apple was providing a lab service and its methods were treated more or less as trade secrets.
Now let’s contrast that history with what the FBI and the courts are ordering Apple to do here. The FBI could have come to Apple with a court order stating that Apple must brute force the PIN on the phone and deliver the contents. It would have been difficult to get a judge to sign off on that, since this quite boldly exceeds the notion of “reasonable assistance” to hack into your own devices. No, to slide this by, the FBI was more clever. It requested that Apple develop a forensics tool but not perform the actual brute force itself. [Continue reading…]
SF Gate reports: There is a chance your cell phone could set your head on fire, according to science (and the New York Police Department).
Depending on how intimate you are with your phone, you could find yourself fanning out the flames of a smothered device if, like many people, you charge your smartphone under your pillow while you sleep.
In an ominous and unexplained public service announcement, the 33rd Precinct in New York’s Washington Heights sent out the following tweet Monday: [Continue reading…]
Don't put your cellphone under a pillow when sleeping or when charging your device. Please share this tip and b safe! pic.twitter.com/uwD3PXgVQf
John Quiggin writes: For most of us, the industrial economy is a thing of the past. In the entire United States, large factories employ fewer than 2 million people. Even adding China to the picture does not change things much. And yet the conceptual categories of the 20th century still dominate our thinking. We remain fixated on the industrial model of economic growth, where ‘growth’ means ‘more of everything’, and we can express our rate of development in a single number. This model leads naturally to the conclusion that economic expansion must eventually run up against constraints on the availability of natural resources, such as trees to make paper.
And yet in 2013, despite positive growth overall, the world reached ‘Peak Paper’: global paper production and consumption reached its maximum, flattened out, and is now falling. A prediction that was over-hyped in the 20th century and then derided in the early 2000s – namely, the Paperless Office – is finally being realised. Growth continues, but paper is in retreat. Why did this seem so unlikely only a decade ago?
The problem is a standard assumption of macroeconomics – namely, that all sectors of the economy expand at a roughly equal rate. If this ‘fixed proportions’ assumption does not hold, the theory used to construct GDP numbers ceases to work, and the concept of a ‘rate of growth’ is no longer meaningful. Until the end of the 20th century, these assumptions did in fact work reasonably well for paper, books and newspapers. The volume of information increased somewhat more rapidly than the economy as a whole, but not so rapidly as to undermine the notion of an overall rate of economic growth. The volume of printed matter grew steadily, to around a million new books every year, and the demand for paper for printing grew in line with demand for books. [Continue reading…]
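The index-number problem Quiggin points to can be made concrete with a toy two-sector economy (all figures invented for illustration): when sectors grow at different rates, the measured aggregate growth rate depends on the sector weights you choose, and as proportions shift there is no longer one meaningful number.

```python
# Toy economy: a 'goods' sector growing 3%/yr, a 'paper' sector shrinking 1%/yr.
# Under the fixed-proportions assumption, one aggregate rate describes everything;
# once proportions drift, the measured rate depends on the weighting year.

def aggregate_growth(weights, sector_rates):
    """Weighted average growth rate across sectors."""
    return sum(w * r for w, r in zip(weights, sector_rates))

rates = [0.03, -0.01]                          # goods, paper
early = aggregate_growth([0.5, 0.5], rates)    # paper still half the economy
late = aggregate_growth([0.9, 0.1], rates)     # paper's share has collapsed

print(f"early-weights growth: {early:.3%}")    # 1.000%
print(f"late-weights growth:  {late:.3%}")     # 2.600%
```

Same sector behaviour, two different “rates of growth” — which is exactly why a declining paper sector can coexist with headline growth.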
Susie Neilson writes: In 2005, the taxonomist Quentin Wheeler named a trio of newly discovered slime-mold beetles after George W. Bush, Donald Rumsfeld, and Dick Cheney. He believed the names could increase public interest in the discovery and classification of new species, and help combat the quickening pace of extinction. (Species go extinct three times faster than we can name them.)
He knew he was onto something when, having received a call from the White House, it was Bush on the other end, thanking him for the honor. Wheeler, now the president of SUNY’s College of Environmental Science and Forestry, began attributing all sorts of provocative names to his bugs, including Darth Vader, Stephen Colbert, Roy and Barbara Orbison, Pocahontas, Hernan Cortez, and the Aztecs — he has even named six species after himself. You can call his strategy “shameless self-promotion” — Wheeler already has.
Nautilus spoke with Wheeler about his work.
What’s exciting about taxonomy?
It is the one field with the audacity to create a living inventory of every living thing on the entire planet and reconstruct the history of the diversity of life. Who else would tackle 12 million species in 3.8 billion years on the entire surface of the planet? If that isn’t real science, I don’t know what is. It infuriates me that taxonomy is marginalized as a bookkeeping activity, when in fact it has the most audacious research agenda of any biological science. [Continue reading…]
Jacob Weisberg writes: “As smoking gives us something to do with our hands when we aren’t using them, Time gives us something to do with our minds when we aren’t thinking,” Dwight Macdonald wrote in 1957. With smartphones, the issue never arises. Hands and mind are continuously occupied texting, e-mailing, liking, tweeting, watching YouTube videos, and playing Candy Crush.
Americans spend an average of five and a half hours a day with digital media, more than half of that time on mobile devices, according to the research firm eMarketer. Among some groups, the numbers range much higher. In one recent survey, female students at Baylor University reported using their cell phones an average of ten hours a day. Three quarters of eighteen-to-twenty-four-year-olds say that they reach for their phones immediately upon waking up in the morning. Once out of bed, we check our phones 221 times a day — an average of every 4.3 minutes — according to a UK study. This number actually may be too low, since people tend to underestimate their own mobile usage. In a 2015 Gallup survey, 61 percent of people said they checked their phones less frequently than others they knew.
Our transformation into device people has happened with unprecedented suddenness. The first touchscreen-operated iPhones went on sale in June 2007, followed by the first Android-powered phones the following year. Smartphones went from 10 percent to 40 percent market penetration faster than any other consumer technology in history. In the United States, adoption hit 50 percent only three years ago. Yet today, not carrying a smartphone indicates eccentricity, social marginalization, or old age.
What does it mean to shift overnight from a society in which people walk down the street looking around to one in which people walk down the street looking at machines? [Continue reading…]
As one of those eccentric, socially marginalized, but not quite old-aged people without a smartphone, I now live in a world where the mass of humanity seems to have become myopic.
A driver remains stationary in front of a green light.
A couple sit next to each other in an airport, wrapped in silence with attention directed elsewhere down their mutually exclusive wormholes.
A jogger in the woods hears no birdsong because his ears are stuffed with plastic buds delivering private tunes.
Amidst all this divided attention, one thing seems abundantly clear: devices tap into and amplify the desire to be some place else.
To be confined to the present place and the present time is to be trapped in a prison cell from which the smartphone offers escape — though of course it doesn’t.
What it does is produce an itch in time; a restless sense that we don’t have enough — that an elusive missing something might soon appear on that mesmerizing little touchscreen.
The effect of this refusal to be where we are is to impoverish life as our effort to make it larger ends up doing the reverse.
Scientific American: More than 100 years ago, American sociologist W.E.B. Du Bois was concerned that race was being used as a biological explanation for what he understood to be social and cultural differences between different populations of people. He spoke out against the idea of “white” and “black” as discrete groups, claiming that these distinctions ignored the scope of human diversity.
Science would favor Du Bois. Today, the mainstream belief among scientists is that race is a social construct without biological meaning. And yet, you might still open a study on genetics in a major scientific journal and find categories like “white” and “black” being used as biological variables.
In an article published today (Feb. 4) in the journal Science, four scholars say racial categories are weak proxies for genetic diversity and need to be phased out. [Continue reading…]
Ancient records tell us that the intrepid Viking seafarers who discovered Iceland, Greenland and eventually North America navigated using landmarks, birds and whales, and little else. There’s little doubt that Viking sailors would also have used the positions of stars at night and the sun during the daytime, and archaeologists have discovered what appears to be a kind of Viking navigational sundial. But without magnetic compasses, like all ancient sailors they would have struggled to find their way once the clouds came over.
However, there are also several reports in Nordic sagas and other sources of a sólarsteinn, or “sunstone”. The literature doesn’t say what this was used for, but it has sparked decades of research examining whether this might be a reference to a more intriguing form of navigational tool.
The idea is that the Vikings may have used the interaction of sunlight with particular types of crystal to create a navigational aid that may even have worked in overcast conditions. This would mean the Vikings had discovered the basic principles of measuring polarised light centuries before they were explained scientifically; these same principles are today used to identify and measure different chemicals. Scientists are now getting closer to establishing whether this form of navigation would have been possible, or whether it is just a fanciful theory.
Pacific Standard reports: Trapezoids are, oddly enough, fundamental to modern science. When European scientists used them to simplify certain astronomical calculations in the 14th century, it was an important first step toward calculus—the mathematics Isaac Newton and Gottfried Leibniz developed to understand the physics of astronomical objects like planets. In other words, trapezoids are important, and we’ve known this for nearly 700 years.
Well, the Babylonians knew all of that 14 centuries earlier, according to new research published in Science, proving once again that ancient societies were way more advanced than we’d like to think. [Continue reading…]
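The trapezoid procedure behind both stories — the Babylonian tablets and the medieval astronomers — is essentially the one still taught in introductory calculus: approximate the area under a curve (for the Babylonians, distance under a velocity-time graph) by summing thin trapezoids. A minimal sketch, with the function and step count chosen arbitrarily for illustration:

```python
def trapezoid(f, a, b, n):
    """Approximate the integral of f on [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))       # endpoints count half
    for i in range(1, n):             # interior points count full
        total += f(a + i * h)
    return total * h

# Area under v(t) = t^2 on [0, 3]; the exact integral is 9.
approx = trapezoid(lambda t: t * t, 0.0, 3.0, 1000)
print(approx)  # very close to 9.0
```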
Science magazine reports: The solar system appears to have a new ninth planet. Today, two scientists announced evidence that a body nearly the size of Neptune — but as yet unseen — orbits the sun every 15,000 years. During the solar system’s infancy 4.5 billion years ago, they say, the giant planet was knocked out of the planet-forming region near the sun. Slowed down by gas, the planet settled into a distant elliptical orbit, where it still lurks today.
The claim is the strongest yet in the centuries-long search for a “Planet X” beyond Neptune. The quest has been plagued by far-fetched claims and even outright quackery. But the new evidence comes from a pair of respected planetary scientists, Konstantin Batygin and Mike Brown of the California Institute of Technology (Caltech) in Pasadena, who prepared for the inevitable skepticism with detailed analyses of the orbits of other distant objects and months of computer simulations. “If you say, ‘We have evidence for Planet X,’ almost any astronomer will say, ‘This again? These guys are clearly crazy.’ I would, too,” Brown says. “Why is this different? This is different because this time we’re right.”
Outside scientists say their calculations stack up and express a mixture of caution and excitement about the result. “I could not imagine a bigger deal if — and of course that’s a boldface ‘if’ — if it turns out to be right,” says Gregory Laughlin, a planetary scientist at the University of California (UC), Santa Cruz. “What’s thrilling about it is [the planet] is detectable.” [Continue reading…]
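A quick sanity check on the reported 15,000-year period: Kepler’s third law for bodies orbiting the sun gives P² = a³, with the period P in years and the semi-major axis a in astronomical units. This is only a scale estimate — the proposed orbit is highly elliptical, so a single distance figure is a simplification:

```python
# Kepler's third law for solar orbits: P^2 = a^3 (P in years, a in AU).
period_years = 15_000.0
semi_major_axis_au = period_years ** (2.0 / 3.0)

# Roughly 608 AU on average, versus about 30 AU for Neptune --
# which is why the planet, if real, has so far escaped detection.
print(f"semi-major axis ~ {semi_major_axis_au:.0f} AU")
```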
The Guardian reports: The human race faces one of its most dangerous centuries yet as progress in science and technology becomes an ever greater threat to our existence, Stephen Hawking warns.
The chances of disaster on planet Earth will rise to a near certainty in the next one to ten thousand years, the eminent cosmologist said, but it will take more than a century to set up colonies in space where human beings could live on among the stars.
“We will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period,” Hawking said. His comments echo those of Lord Rees, the astronomer royal, who raised his own concerns about the risks of self-annihilation in his 2003 book Our Final Century.
Speaking to the Radio Times ahead of the BBC Reith Lecture, in which he will explain the science of black holes, Hawking said most of the threats humans now face come from advances in science and technology, such as nuclear weapons and genetically engineered viruses. [Continue reading…]
Paul La Farge writes: In A History of Reading, the Canadian novelist and essayist Alberto Manguel describes a remarkable transformation of human consciousness, which took place around the 10th century A.D.: the advent of silent reading. Human beings have been reading for thousands of years, but in antiquity, the normal thing was to read aloud. When Augustine (the future St. Augustine) went to see his teacher, Ambrose, in Milan, in 384 A.D., he was stunned to see him looking at a book and not saying anything. With the advent of silent reading, Manguel writes,
… the reader was at last able to establish an unrestricted relationship with the book and the words. The words no longer needed to occupy the time required to pronounce them. They could exist in interior space, rushing on or barely begun, fully deciphered or only half-said, while the reader’s thoughts inspected them at leisure, drawing new notions from them, allowing comparisons from memory or from other books left open for simultaneous perusal.
To read silently is to free your mind to reflect, to remember, to question and compare. The cognitive scientist Maryanne Wolf calls this freedom “the secret gift of time to think”: When the reading brain becomes able to process written symbols automatically, the thinking brain, the I, has time to go beyond those symbols, to develop itself and the culture in which it lives.
A thousand years later, critics fear that digital technology has put this gift in peril. The Internet’s flood of information, together with the distractions of social media, threaten to overwhelm the interior space of reading, stranding us in what the journalist Nicholas Carr has called “the shallows,” a frenzied flitting from one fact to the next. In Carr’s view, the “endless, mesmerizing buzz” of the Internet imperils our very being: “One of the greatest dangers we face,” he writes, “as we automate the work of our minds, as we cede control over the flow of our thoughts and memories to a powerful electronic system, is … a slow erosion of our humanness and our humanity.”
There’s no question that digital technology presents challenges to the reading brain, but, seen from a historical perspective, these look like differences of degree, rather than of kind. To the extent that digital reading represents something new, its potential cuts both ways. Done badly (which is to say, done cynically), the Internet reduces us to mindless clickers, racing numbly to the bottom of a bottomless feed; but done well, it has the potential to expand and augment the very contemplative space that we have prized in ourselves ever since we learned to read without moving our lips. [Continue reading…]
Following Krauss’s tweet in September, LIGO spokesperson Gabriela González, a physicist at Louisiana State University in Baton Rouge, told Davide Castelvecchi she was “upset at the possibility that someone in the LIGO team might have initiated the rumour, although Krauss and other researchers told me [DC] that they did not hear it directly from members of the LIGO collaboration. ‘I give it a 10–15% likelihood of being right,’ says Krauss, who works at Arizona State University in Tempe.”
Krauss has now boosted his confidence level to 60% — a surprisingly high level given that he says this:
“I don’t know if the rumour is solid,” Krauss told the Guardian. “If I don’t hear anything in the next two months, I’ll conclude it was false.”
González now tells Ian Sample at The Guardian:
“The LIGO instruments are still taking data today, and it takes us time to analyse, interpret and review results, so we don’t have any results to share yet.
“We take pride in reviewing our results carefully before submitting them for publication – and for important results, we plan to ask for our papers to be peer-reviewed before we announce the results – that takes time too!” she said.
At this point, it seems like the story might reveal more about Lawrence Krauss than it says about gravitational waves.
What makes Krauss’s excitement so uncontainable when the news will definitely come out — if and when there is news — without his help?
Scientists have a duty to fulfill a role as public educators and there has never before been a time when this need has been greater. To a degree this is an evangelical role, but as with every other individual who assumes such a position, each is at risk of becoming intoxicated by the reverential respect they receive from their audience as message and messenger become intertwined.
This may then lead to an over-extension of authority — exactly what Krauss and fellow scientists who dub themselves antitheists are guilty of when they make pronouncements about religion.
More than 200 far-right extremists have been arrested after they went on a rampage during a xenophobic rally in the German city of Leipzig, setting cars on fire and smashing windows.
Many of the extremists were already known to police as football hooligans and wreaked chaos on Monday in an area known to be left-leaning, while thousands of supporters of the anti-migrant Pegida movement held an anti-refugee demonstration elsewhere in the city, authorities said.
A total of 211 arrests were made after the Connewitz district of the eastern city was attacked, police confirmed.
Are we to view this as a modern-day crusade in which German Christians purge their fatherland of the invading Muslim hordes?
On the contrary, I doubt very much that many (or perhaps even any) of those involved would be particularly ardent in expressing any religious faith. What is likely beyond doubt is that they were all white.
Xenophobia is generally a form of racism and the xenophobes don’t close ranks on the basis of theological quizzing — they can identify their cohorts and their enemies simply through the color of their skin.
When religion and racism intermingle, the underpinning of the racism is much less likely to be found in religious doctrine itself than it is on prevalent affiliations based on racial, national and cultural identity.
If, as they claim, the antitheists want to rescue humanity from religion because of its irrationality, why focus on religion alone? There are many other forms of irrational behavior that are equally if not more destructive.
For instance, the religion of modernity, which through advertising relentlessly promotes more widespread and unquestioning faith than that found in any conventional religion, is consumerism: the belief that the acquisition of material goods is the key to human happiness.
You are what you own — I know of no other idea that is more irrational and yet holds such a firm grip on so much of humanity.
This religion has grown more rapidly and more extensively than any other in human history and in the process now jeopardizes the future of life on Earth.
In terms of doctrine, most conventional religions oppose materialism. As the Bible says:
Do not store up for yourselves treasures on earth, where moth and rust destroy, and where thieves break in and steal. But store up for yourselves treasures in heaven, where neither moth nor rust destroys, and where thieves do not break in or steal.
The antitheists are going to say this is a bad investment because heaven doesn’t exist, but in doing so they devalue the ecological wisdom contained in such religious efforts to rein in human avarice.
The core criticism of religion is directed at its appeal to beliefs that have no empirical foundation and yet what’s strange about focusing on doctrine is that it glosses over the gulf between belief and practice.
Arguably, the destructive impact of religion derives mostly from the fact that so many believers fail to practice what they profess. They situate the locus of meaning in the wrong place by thinking, this is who I am, rather than this is how I live. In so doing, they inhabit identity traps: static forms of self-definition that obscure the dynamic and interactive nature of human experience.
On this issue, Lawrence Krauss and others could learn a lot from Neil deGrasse Tyson: