Friday, March 29, 2019

The age of the Internet grants anyone who can access it an opportunity to learn a great deal about virtually anything in a short amount of time. Unfortunately, the published information can be incorrect. Anyone who Googles "The John Birch Society" will see this problem demonstrated almost immediately. From the slanted view given of the Society at Wikipedia to the crazed rantings of detractors, much of the information provided is either distorted or downright false. Thus, we offer this page where we seek to set the record straight.

Myth: The JBS is a radical organization full of right-wing extremists.
Fact: The JBS is dedicated to restoring the Republic according to the vision of the Founding Fathers: limited government, individual liberty, and the rule of law. Along with America's Founders, we believe that governments are instituted to protect individual rights and liberties, and are not formed to provide for the wants of individuals. To label JBS radical or extreme for agreeing with our nation's Founders is to place that same label on them.
Myth: The JBS message is hate-filled.
Fact: There never has been any hate in our agenda, and hate will never be employed as a tactic. From the outset, membership in JBS has been strictly denied to haters, and should any member adopt a racist or anti-Semitic attitude or behavior, that person's membership is permanently revoked.
Myth: JBS founder Robert Welch called President Dwight Eisenhower a Communist.
Fact: Originally detailing some of Pres. Eisenhower's history in a 1954 letter sent privately to a few friends, Mr. Welch's research grew over several years into a full-length book entitled The Politician (1963). Once the book was published, its very existence was ignored while critics continued to dwell on only one of several possible conclusions offered by Mr. Welch. The book comprises 300 pages of text and 150 pages of footnotes and documentation, including coverage of one of Mr. Eisenhower's most immoral and despicable acts: authorizing "Operation Keelhaul," which used American soldiers to repatriate anti-communist Poles to certain death or torture. Read the book for yourself to discover what Mr. Welch did say, and learn the role played by Mr. Eisenhower over his many years as one of our nation's military and political leaders.
Myth: The JBS considers public water fluoridation part of a Communist mind-control plot.
Fact: While the JBS doesn't agree with water fluoridation because it is a form of government mass medication of citizens in violation of their individual right to choose which medicines they ingest, it was never opposed as a mind-control plot. If citizens want to add fluoride to their diet or daily routine, there are plentiful opportunities for them to do so. It’s a choice they should make, not their local government. Furthermore, opposition to fluoridation was never a major action item of any JBS campaign.
Myth: The JBS is nothing more than a group of conspiracy theorists.
Fact: The John Birch Society reports on those that create and influence public policy and the motivations behind their actions. JBS directs members to counter unconstitutional actions through peaceful, educational means, including supporting or blocking legislation, setting up relationships with key elected officials and local leaders, and holding elected officials accountable to their oath of office. By definition, a conspiracy exists when two or more persons work secretly for an evil or unlawful purpose. Given the state that America is in today, one could argue that an unconstitutional agenda is no longer secret, but in the open for all to see. Those that continue to work against the Constitution do so brazenly, continuing to make promises and entitlements to citizens that the country cannot afford while committing future generations to crushing debt and ever decreasing prosperity at the expense of liberty.
Myth: The JBS was booted out of the conservative movement by William F. Buckley.
Fact: In the mid-1950s on more than one occasion, John Birch Society Founder Robert Welch financially helped an up-and-coming conservative leader, and recommended that others do the same, so this rising young star could get his new magazine off the ground. That newcomer was William F. Buckley and his magazine was National Review. A few short years later, Mr. Buckley attacked Robert Welch in a lengthy article in his magazine. Over the past several decades, Buckley carried out a campaign of attacking or disparaging Welch and the Society. On numerous occasions, he boasted to friends that he intended to destroy The John Birch Society. He didn't succeed. Read more in John McManus' book, William F. Buckley: Pied Piper for the Establishment.
Myth: The JBS is against civil rights because it opposed several Civil Rights acts.
Fact: Correcting civil rights abuses that do exist should be accomplished at the state and local level, something The John Birch Society members - of all races, colors and ethnic backgrounds - have always supported. Civil rights legislation should have come from the states and the communities rather than being used as a steppingstone toward our present-day out-of-control federal government.
Myth: The JBS is nothing more than controlled opposition, pretending to be a friend to the cause of liberty. Robert Welch sold his candy company to the leftist, internationalist Rockefellers.
Fact: Robert Welch was out of the candy manufacturing business (retiring in 1956) when his brother (for whom he used to work) sold the James O. Welch Candy company to Nabisco in 1963. JBS has never been funded by any Rockefeller money. Nelson Rockefeller publicly attacked JBS, and JBS has exposed the Rockefeller support for the United Nations and its goal of a new world order more than any other organization.
Myth: The John Birch Society played a role in the assassination of President Kennedy.
Fact: This is perhaps the most despicable myth. The truth is that The John Birch Society has always lived by the age-old adage that foul means can never be employed to accomplish a goal, no matter how important that goal. While JBS and its members called attention to the many dangerous and unconstitutional acts and programs promoted by President Kennedy, it has always been the Society’s position that anything harmful to our country emanating from the White House should be countered by congressional or judicial action urged upon our nation’s leaders by concerned American citizens. Immediately after the assassination, founder Robert Welch canceled the “For God and Country” rally that thousands had committed to attend in Boston the following day. He then sent a telegram of condolences to Mrs. Kennedy. In that brief message, published by the Boston Globe on November 23, 1963, Robert Welch stated: “On behalf of the Council of the John Birch Society and myself, I wish to express our deep sorrow at the untimely loss to our nation of its youngest elected President and to convey more particularly to you and all members of President Kennedy’s family our sincere and heartfelt sympathy in your overwhelming personal loss.”

Saturday, March 02, 2019

China starting World War III very soon!

You Will Lose Your Job to a Robot—and Sooner Than You Think

I want to tell you straight off what this story is about: Sometime in the next 40 years, robots are going to take your job.
I don’t care what your job is. If you dig ditches, a robot will dig them better. If you’re a magazine writer, a robot will write your articles better. If you’re a doctor, IBM’s Watson will no longer “assist” you in finding the right diagnosis from its database of millions of case studies and journal articles. It will just be a better doctor than you.
And CEOs? Sorry. Robots will run companies better than you do. Artistic types? Robots will paint and write and sculpt better than you. Think you have social skills that no robot can match? Yes, they can. Within 20 years, maybe half of you will be out of jobs. A couple of decades after that, most of the rest of you will be out of jobs.
In one sense, this all sounds great. Let the robots have the damn jobs! No more dragging yourself out of bed at 6 a.m. or spending long days on your feet. We’ll be free to read or write poetry or play video games or whatever we want to do. And a century from now, this is most likely how things will turn out. Humanity will enter a golden age.
But what about 20 years from now? Or 30? We won’t all be out of jobs by then, but a lot of us will—and it will be no golden age. Until we figure out how to fairly distribute the fruits of robot labor, it will be an era of mass joblessness and mass poverty. Working-class job losses played a big role in the 2016 election, and if we don’t want a long succession of demagogues blustering their way into office because machines are taking away people’s livelihoods, this needs to change, and fast. Along with global warming, the transition to a workless future is the biggest challenge by far that progressive politics—not to mention all of humanity—faces. And yet it’s barely on our radar.

That’s kind of a buzzkill, isn’t it? Luckily, it’s traditional that stories about difficult or technical subjects open with an entertaining or provocative anecdote. The idea is that this allows readers to ease slowly into daunting material. So here’s one for you: Last year at Christmas, I was over at my mother’s house and mentioned that I had recently read an article about Google Translate. It turns out that a few weeks previously, without telling anyone, Google had switched over to a new machine-learning algorithm. Almost overnight, the quality of its translations skyrocketed. I had noticed some improvement myself but had chalked it up to the usual incremental progress these kinds of things go through. I hadn’t realized it was due to a quantum leap in software.
But if Google’s translation algorithm was better, did that mean its voice recognition was better too? And its ability to answer queries? Hmm. How could we test that? We decided to open presents instead of cogitating over this.
But after that was over, the subject of erasers somehow came up. Which ones are best? Clear? Black? Traditional pink? Come to think of it, why are erasers traditionally pink? “I’ll ask Google!” I told everyone. So I pulled out my phone and said, “Why are erasers pink?” Half a second later, Google told me.
Not impressed? You should be. We all know that phones can recognize voices tolerably well these days. And we know they can find the nearest café or the trendiest recipe for coq au vin. But what about something entirely random? And not a simple who, where, or when question. This was a why question, and it wasn’t about why the singer Pink uses erasers or why erasers are jinxed. Google has to be smart enough to figure out in context that I said pink and that I’m asking about the historical reason for the color of erasers, not their health or the way they’re shaped. And it did. In less than a second. With nothing more than a cheap little microprocessor and a slow link to the internet.
(In case you’re curious, Google got the answer from Design*Sponge: “The eraser was originally produced by the Eberhard Faber Company…The erasers featured pumice, a volcanic ash from Italy that gave them their abrasive quality, along with their distinctive color and smell.”)
Still not impressed? When Watson famously won a round of Jeopardy! against the two best human players of all time, it needed a computer the size of a bedroom to answer questions like this. That was only seven years ago.
What do pink erasers have to do with the fact that we’re all going to be out of a job in a few decades? Consider: Last October, an Uber trucking subsidiary named Otto delivered 2,000 cases of Budweiser 120 miles from Fort Collins, Colorado, to Colorado Springs—without a driver at the wheel. Within a few years, this technology will go from prototype to full production, and that means millions of truck drivers will be out of a job.
Automated trucking doesn’t rely on newfangled machines, like the powered looms and steam shovels that drove the Industrial Revolution of the 19th century. Instead, like Google’s ability to recognize spoken words and answer questions, self-driving trucks—and cars and buses and ships—rely primarily on software that mimics human intelligence. By now everyone’s heard the predictions that self-driving cars could lead to 5 million jobs being lost, but few people understand that once artificial-intelligence software is good enough to drive a car, it will be good enough to do a lot of other things too. It won’t be millions of people out of work; it will be tens of millions.
This is what we mean when we talk about “robots.” We’re talking about cognitive abilities, not the fact that they’re made of metal instead of flesh and powered by electricity instead of chicken nuggets.
In other words, the advances to focus on aren’t those in robotic engineering—though they are happening, too—but the way we’re hurtling toward artificial intelligence, or AI. While we’re nowhere near human-level AI yet, the progress of the past couple of decades has been stunning. After many years of nothing much happening, suddenly robots can play chess better than the best grandmaster. They can play Jeopardy! better than the best humans. They can drive cars around San Francisco—and they’re getting better at it every year. They can recognize faces well enough that Welsh police recently made the first-ever arrest in the United Kingdom using facial recognition software. After years of plodding progress in voice recognition, Google announced earlier this year that it had reduced its word error rate from 8.5 percent to 4.9 percent in 10 months.
All of this is a sign that AI is improving exponentially, a product of both better computer hardware and software. Hardware has historically followed a growth curve called Moore’s law, in which power and efficiency double every couple of years, and recent improvements in software algorithms have been even more explosive. For a long time, these advances didn’t seem very impressive: Going from the brainpower of a bacterium to the brainpower of a nematode might technically represent an enormous leap, but on a practical level it doesn’t get us that much closer to true artificial intelligence. However, if you keep up the doubling for a while, eventually one of those doubling cycles takes you from the brainpower of a lizard (who cares?) to the brainpower of a mouse and then a monkey (wow!). Once that happens, human-level AI is just a short step away.
This can be hard to imagine, so here’s a chart that shows what an exponential doubling curve looks like, measured in petaflops (quadrillions of calculations per second). During the first 70 years of the digital era, computing power doubled every couple of years—and that produced steadily improving accounting software, airplane reservation systems, weather forecasts, Spotify, and the like. But on the scale of the human brain—usually estimated at 10 to 50 petaflops—it produced computing power so minuscule that you can’t see any change at all. Around 2025 we’ll finally start to see visible progress toward artificial intelligence. A decade later we’ll be up to about one-tenth the power of a human brain, and a decade after that we’ll have full human-level AI. It will seem like it happened overnight, but it’s really the result of a century of steady—but mostly imperceptible—progress.
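The doubling arithmetic behind that chart can be sketched in a few lines. Everything here is an illustrative assumption: the 10-petaflop brain figure is the low end of the article's 10-to-50 range, the two-year doubling period is the Moore's-law rule of thumb, and the 2015 baseline is invented purely to make the curve land roughly where the article's timeline does (visible progress around 2025, about one-tenth of a brain around 2035, a full brain by the mid-2040s).

```python
# Sketch of the doubling argument: compute grows ~2x every two years,
# and measured against a brain-scale target it looks flat until the
# final few doublings. All figures are illustrative assumptions, not
# measured data.

BRAIN_PETAFLOPS = 10          # low end of the 10-50 petaflops estimate
DOUBLING_YEARS = 2            # Moore's-law-style doubling period
START_YEAR = 2015
START_PETAFLOPS = 0.001       # assumed baseline, for illustration only

def compute_power(year):
    """Petaflops available in `year` under steady exponential doubling."""
    doublings = (year - START_YEAR) / DOUBLING_YEARS
    return START_PETAFLOPS * 2 ** doublings

for year in range(2015, 2046, 5):
    frac = compute_power(year) / BRAIN_PETAFLOPS
    print(f"{year}: {frac:8.4f} of a human brain")
```

Run it and the point of the article jumps out: for two decades the fraction barely moves, then the last handful of doublings carry it from a rounding error to brain scale.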
Are we really this close to true AI? Here’s a yardstick to think about. Even with all this doubling going on, until recently computer scientists thought we were still years away from machines being able to win at the ancient game of Go, usually regarded as the most complex human game in existence. But last year, a computer beat a Korean grandmaster considered one of the best of all time, and earlier this year it beat the highest-ranked Go player in the world. Far from slowing down, progress in artificial intelligence is now outstripping even the wildest hopes of the most dedicated AI cheerleaders. Unfortunately, for those of us worried about robots taking away our jobs, these advances mean that mass unemployment is a lot closer than we feared—so close, in fact, that it may be starting already. But you’d never know that from the virtual silence about solutions in policy and political circles.
I’m hardly alone in thinking we’re on the verge of an AI Revolution. Many who work in the software industry—people like Bill Gates and Elon Musk—have been sounding the alarm for years. But their concerns are largely ignored by policymakers and, until recently, often ridiculed by writers tasked with interpreting technology or economics. So let’s take a look at some of the most common doubts of the AI skeptics.
#1: We’ll never get true AI because computing power won’t keep doubling forever. We’re going to hit the limits of physics before long. There are several pretty good reasons to dismiss this claim as a roadblock. To start, hardware designers will invent faster, more specialized chips. Google, for example, announced last spring that it had created a microchip called a Tensor Processing Unit, which it claimed was up to 30 times faster and 80 times more power efficient than an Intel processor for machine learning tasks. A huge array of those chips is now available to researchers who use Google’s cloud services. Other chips specialized for specific aspects of AI (image recognition, neural networking, language processing, etc.) either exist already or are certain to follow.
What’s more, this raw power is increasingly being harnessed in a manner similar to the way the human brain works. Your brain is not a single, superpowerful computing device. It’s made up of about 100 billion neurons working in parallel—i.e., all at the same time—to create human-level intelligence and consciousness. At the lowest level, neurons operate in parallel to create small clusters that perform semi-independent actions like responding to a specific environmental cue. At the next level, dozens of these clusters work together in each of about 100 “sub-brains”—distinct organs within the brain that perform specialized jobs such as speech, visual processing, and balance. Finally, all these sub-brains operate in parallel, and the resulting overall state is monitored and managed by executive functions that make sense of the world and provide us with our feeling that we have conscious control of our actions.
Modern computers also yoke lots of microprocessors together. As of 2017, the fastest computer in the world uses roughly 40,000 processors with 260 cores each. That’s more than 10 million processing cores running in parallel. Each one of these cores has less power than the Intel processor on your desktop, but the entire machine delivers about the same power as the human brain.
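The core-count arithmetic above is easy to check. The node and core figures below are the article's round numbers, not the exact specification of any particular machine.

```python
# Sanity check of the parallelism arithmetic: ~40,000 processors with
# 260 cores each, per the article's round figures.
processors = 40_000
cores_per_processor = 260

total_cores = processors * cores_per_processor
print(f"{total_cores:,} cores")  # 10,400,000 -- "more than 10 million"
```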
This doesn’t mean AI is here already. Far from it. This “massively parallel” architecture still presents enormous programming challenges, but as we get better at exploiting it we’re certain to make frequent breakthroughs in software performance. In other words, even if Moore’s law slows down or stops, the total power of everything put together—more use of custom microchips, more parallelism, more sophisticated software, and even the possibility of entirely new ways of doing computing—will almost certainly keep growing for many more years.
#2: Even if computing power keeps doubling, it has already been doubling for decades. You guys keep predicting full-on AI, but it never happens. It’s true that during the early years of computing there was a lot of naive optimism about how quickly we’d be able to build intelligent machines. But those rosy predictions died in the ’70s, as computer scientists came to realize that even the fastest mainframes of the day produced only about a billionth of the processing power of the human brain. It was a humbling realization, and the entire field has been almost painfully realistic about its progress ever since.
We’ve finally built computers with roughly the raw processing power of the human brain—although only at a cost of more than $100 million and with an internal architecture that may or may not work well for emulating the human mind. But in another 10 years, this level of power will likely be available for less than $1 million, and thousands of teams will be testing AI software on a platform that’s actually capable of competing with humans.
#3: Okay, maybe we will get full AI. But it only means that robots will act intelligent, not that they’ll really be intelligent. This is just a tedious philosophical debating point. For the purposes of employment, we don’t really care if a smart computer has a soul—or if it can feel love and pain and loyalty. We only care if it can act like a human being well enough to do anything we can do. When that day comes, we’ll all be out of jobs even if the computers taking our places aren’t “really” intelligent.
#4: Fine. But waves of automation—steam engines, electricity, computers—always lead to predictions of mass unemployment. Instead they just make us more efficient. The AI Revolution will be no different. This is a popular argument. It’s also catastrophically wrong.
The Industrial Revolution was all about mechanical power: Trains were more powerful than horses, and mechanical looms were more efficient than human muscle. At first, this did put people out of work: Those loom-smashing weavers in Yorkshire—the original Luddites—really did lose their livelihoods. This caused massive social upheaval for decades until the entire economy adapted to the machine age. When that finally happened, there were as many jobs tending the new machines as there used to be doing manual labor. The eventual result was a huge increase in productivity: A single person could churn out a lot more cloth than she could before. In the end, not only were as many people still employed, but they were employed at jobs tending machines that produced vastly more wealth than anyone had thought possible 100 years before. Once labor unions began demanding a piece of this pie, everyone benefited.
The AI Revolution will be nothing like that. When robots become as smart and capable as human beings, there will be nothing left for people to do because machines will be both stronger and smarter than humans. Even if AI creates lots of new jobs, it’s of no consequence. No matter what job you name, robots will be able to do it. They will manufacture themselves, program themselves, repair themselves, and manage themselves. If you don’t appreciate this, then you don’t appreciate what’s barreling toward us.
In fact, it’s even worse. In addition to doing our jobs at least as well as we do them, intelligent robots will be cheaper, faster, and far more reliable than humans. And they can work 168 hours a week, not just 40. No capitalist in her right mind would continue to employ humans. They’re expensive, they show up late, they complain whenever something changes, and they spend half their time gossiping. Let’s face it: We humans make lousy laborers.
If you want to look at this through a utopian lens, the AI Revolution has the potential to free humanity forever from drudgery. In the best-case scenario, a combination of intelligent robots and green energy will provide everyone on Earth with everything they need. But just as the Industrial Revolution caused a lot of short-term pain, so will intelligent robots. On the road to our Star Trek future, but before we finally get there, the rich are going to get richer—because they own the robots—and the rest of us are going to get poorer because we’ll be out of jobs. Unless we figure out what we’re going to do about that, the misery of workers over the next few decades will be far worse than anything the Industrial Revolution produced.
Wait, wait, skeptics will say: If all this is happening as we speak, why aren’t people losing their jobs already? Several sharp observers have made this point, including James Surowiecki in a recent issue of Wired. “If automation were, in fact, transforming the US economy,” he wrote, “two things would be true: Aggregate productivity would be rising sharply, and jobs would be harder to come by than in the past.” But neither is happening. Productivity has actually stalled since 2000 and jobs have gotten steadily more plentiful ever since the Great Recession ended. Surowiecki also points out that job churn is low, average job tenure hasn’t changed much in decades, and wages are rising—though he admits that wage increases are “meager by historical standards.”
True enough. But as I wrote four years ago, since 2000 the share of the population that’s employed has decreased; middle-class wages have flattened; corporations have stockpiled more cash and invested less in new products and new factories; and as a result of all this, labor’s share of national income has declined. All those trends are consistent with job losses to old-school automation, and as automation evolves into AI, they are likely to accelerate.
That said, the evidence that AI is currently affecting jobs is hard to assess, for one big and obvious reason: We don’t have AI yet, so of course we’re not losing jobs to it. For now, we’re seeing only a few glimmers of smarter automation, but nothing even close to true AI.
Remember that artificial intelligence progresses in exponential time. This means that even as computer power doubles from a trillionth of a human brain’s power to a billionth and then a millionth, it has little effect on the level of employment. Then, in the relative blink of an eye, the final few doublings take place and robots go from having a thousandth of human brainpower to full human-level intelligence. Don’t get fooled by the fact that nothing much has happened yet. In another 10 years or so, it will.
So let’s talk about which jobs are in danger first. Economists generally break employment into cognitive versus physical jobs and routine versus nonroutine jobs. This gives us four basic categories of work:
Routine physical: digging ditches, driving trucks
Routine cognitive: accounts-payable clerk, telephone sales
Nonroutine physical: short-order cook, home health aide
Nonroutine cognitive: teacher, doctor, CEO
Routine tasks will be the first to go—and thanks to advances in robotics engineering, both physical and cognitive tasks will be affected. In a recent paper, a team from Oxford and Yale surveyed a large number of machine-learning researchers to produce a “wisdom of crowds” estimate of when computers would be able to take over various human jobs. Two-thirds said progress in machine learning had accelerated in recent years, with Asian researchers even more optimistic than North American researchers about the advent of full AI within 40 years.
But we don’t need full AI for everything. The machine-learning researchers estimate that speech transcribers, translators, commercial drivers, retail sales, and similar jobs could be fully automated during the 2020s. Within a decade after that, all routine jobs could be gone.
Nonroutine jobs will be next: surgeons, novelists, construction workers, police officers, and so forth. These jobs could all be fully automated during the 2040s. By 2060, AI will be capable of performing any task currently done by humans. This doesn’t mean that literally every human being on the planet will be jobless by then—in fact, the researchers suggest it could take another century before that happens—but that’s hardly any solace. By 2060 or thereabouts, we’ll have AI that can do anything a normal human can do, which means that nearly all normal jobs will be gone. And normal jobs are what almost all of us have.
2060 seems a long way off, but if the Oxford-Yale survey is right, we’ll face an employment apocalypse far sooner than that: the disappearance of routine work of all kinds by the mid-2030s. That represents nearly half the US labor force. The consulting firm PricewaterhouseCoopers recently released a study saying much the same. It predicts that 38 percent of all jobs in the United States are “at high risk of automation” by the early 2030s, most of them in routine occupations. In the even nearer term, the World Economic Forum predicts that the rich world will lose 5 million jobs to robots by 2020, while a group of AI experts, writing in Scientific American, figures that 40 percent of the 500 biggest companies will vanish within a decade.
Not scared yet? Kai-Fu Lee, a former Microsoft and Google executive who is now a prominent investor in Chinese AI startups, thinks artificial intelligence “will probably replace 50 percent of human jobs.” When? Within 10 years. Ten years! Maybe it’s time to really start thinking hard about AI.
And forget about putting the genie back in the bottle. AI is coming whether we like it or not. The rewards are just too great. Even if America did somehow stop AI research, it would only mean that the Chinese or the French or the Brazilians would get there first. Russian President Vladimir Putin agrees. “Artificial intelligence is the future, not only for Russia but for all humankind,” he announced in September. “Whoever becomes the leader in this sphere will become the ruler of the world.” There’s just no way around it: For the vast majority of jobs, work as we know it will come steadily to an end between about 2025 and 2060.
So who benefits? The answer is obvious: the owners of capital, who will control most of the robots. Who suffers? That’s obvious too: the rest of us, who currently trade work for money. No work means no money.
But things won’t actually be quite that grim. After all, fully automated farms and factories will produce much cheaper goods, and competition will then force down prices. Basic material comfort will be cheap as dirt.
Still not free, though. And capitalists can only make money if they have someone to sell their goods to. This means that even the business class will eventually realize that ubiquitous automation doesn’t really benefit them after all. They need customers with money if they want to be rich themselves.
One way or another, then, the answer to the mass unemployment of the AI Revolution has to involve some kind of sweeping redistribution of income that decouples it from work. Or a total rethinking of what “work” is. Or a total rethinking of what wealth is. Let’s consider a few of the possibilities.
The welfare state writ large: This is the simplest to think about. It’s basically what we have now, but more extensive. Unemployment insurance will be more generous and come with no time limits. National health care will be free for all. Anyone without a job will qualify for some basic amount of food and housing. Higher taxes will pay for it, but we’ll still operate under the assumption that gainful employment is expected from anyone able to work.
This is essentially the “bury our heads in the sand” option. We refuse to accept that work is truly going away, so we continue to punish people who aren’t employed. Jobless benefits remain stingy so that people are motivated to find work—even though there aren’t enough jobs to go around. We continue to believe that eventually the economy will find a new equilibrium.
This can’t last for too long, and millions will suffer during the years we continue to delude ourselves. But it will protect the rich for a while.
Universal basic income #1: This is a step further down the road. Everyone would qualify for a certain level of income from the state, but the level of guaranteed income would be fairly modest because we would still want people to work. Unemployment wouldn’t be as stigmatized as it is in today’s welfare state, but neither would widespread joblessness be truly accepted as a permanent fact of life. Some European countries are moving toward a welfare state with cash assistance for everyone.
Universal basic income #2: This is UBI on steroids. It’s available to everyone, and the income level is substantial enough to provide a satisfying standard of living. This is what we’ll most likely get once we accept that mass unemployment isn’t a sign of lazy workers and social decay, but the inevitable result of improving technology. Since there’s no personal stigma attached to joblessness and no special reason that the rich should reap all the rewards of artificial intelligence, there’s also no reason to keep the universal income level low. After all, we aren’t trying to prod people back into the workforce. In fact, the time will probably come when we actively want to do just the opposite: provide an income large enough to motivate people to leave the workforce and let robots do the job better.
Silicon Valley—perhaps unsurprisingly—is fast becoming a hotbed of UBI enthusiasm. Tech executives understand what’s coming, and that their own businesses risk a backlash unless we take care of automation’s victims. Uber has shown an interest in UBI. Facebook CEO Mark Zuckerberg supports it. Ditto for Tesla CEO Elon Musk and Slack CEO Stewart Butterfield. A startup incubator called Y Combinator is running a pilot program to find out what happens if you give people a guaranteed income.
There are even some countries that are now trying it. Switzerland rejected a UBI proposal in 2016, but Finland is experimenting with a small-scale UBI that pays the unemployed about $700 per month even after they find work. UBI is also getting limited tryouts by cities in Italy and Canada. Right now these are all pilot projects aimed at learning more about how to best run a UBI program and how well it works. But as large-scale job losses from automation start to become real, we should expect the idea to spread rapidly.
A tax on robots: This is a notion raised by a draft report to the European Parliament and endorsed by Bill Gates, who suggests that robots should pay income tax and payroll tax just like human workers. That would keep humans more competitive. Unfortunately, there’s a flaw here: The end result would be to artificially increase the cost of employing robots, and thus the cost of the goods they produce. Unless every country creates a similar tax, it accomplishes nothing except to push robot labor overseas. We’d be worse off than if we simply let the robots take our jobs in the first place. Nonetheless, a robot tax could still have value as a way of modestly slowing down job losses. Economist Robert Shiller suggests that we should consider “at least modest robot taxes during the transition to a different world of work.” And where would the money go? “Revenue could be targeted toward wage insurance,” he says. In other words, a UBI.
Socialization of the robot workforce: In this scenario, which would require a radical change in the US political climate, private ownership of intelligent robots would be forbidden. The market economy we have today would continue to exist with one exception: The government would own all intelligent robots and would auction off their services to private industry. The proceeds would be divided among everybody.
Progressive taxation on a grand scale: Let the robots take all the jobs, but tax all income at a flat 90 percent. The rich would still have an incentive to run businesses and earn more money, but for the most part labor would be considered a societal good, like infrastructure, not the product of individual initiative.
Wealth tax: Intelligent robots will be able to manufacture material goods and services cheaply, but there will still be scarcity. No matter how many robots you have, there’s only so much beachfront property in Southern California. There are only so many original Rembrandts. There are only so many penthouse suites. These kinds of things will be the only real wealth left, and the rich will still want them. So if robots make the rich even richer, they’ll bid up the price of these luxuries commensurately, and all that’s left is to tax them at high rates. The rich still get their toys, while the rest of us get everything we want except for a view of the sun setting over the Pacific Ocean.
A hundred years from now, all of this will be moot. Society will adapt in ways we can’t foresee, and we’ll all be far wealthier, safer, and more comfortable than we are today—assuming, of course, that the robots don’t kill us all, Skynet fashion.
But someone needs to be thinking hard about how to prepare for what happens in the meantime. Not many are. Last year, for example, the Obama White House released a 48-page report called “Preparing for the Future of Artificial Intelligence.” That sounds promising. But it devoted less than one page to economic impacts and concluded only that “policy questions raised by AI-driven automation are important but they are best addressed by a separate White House working group.”
Regrettably, the coming jobocalypse has so far remained the prophecy of a few Cassandras: mostly futurists, academics, and tech executives. For example, Eric Schmidt, chairman of Google’s parent company, believes that AI is coming faster than we think, and that we should provide jobs to everyone during the transition. “The country’s goal should be full employment all the time, and do whatever it takes,” he says.
Another sharp thinker about our jobless future is Martin Ford, author of Rise of the Robots. Mass joblessness, he warns, isn’t limited to low-skill workers. Nor is it something we can fight by committing to better education. AI will decimate any job that’s “predictable”—which means nearly all of them. Many of us might not like to hear this, but Ford is unsentimental about the work we do. “Relatively few people,” he says, are paid “primarily to engage in truly creative work or ‘blue sky’ thinking.”
All this is bad enough, but it’s made worse by the fact that income inequality has already been increasing for decades. “The frightening reality,” Ford says, is that “we may face the prospect of a ‘perfect storm’ where the impacts from soaring inequality, technological unemployment, and climate change unfold roughly in parallel, and in some ways amplify and reinforce each other.” Unsurprisingly, he believes the only plausible solution is some form of universal basic income.
So how do we get these ideas into the political mainstream? One thing is certain: The monumental task of dealing with the AI Revolution will be almost entirely up to the political left. After all, when the automation of human labor begins in earnest, the big winners are initially going to be corporations and the rich. Because of this, conservatives will be motivated to see every labor displacement as a one-off event, just as they currently view every drought, every wildfire, and every hurricane as a one-off event. They refuse to see that global warming is behind changing weather patterns because dealing with climate change requires environmental regulations that are bad for business and bad for the rich. Likewise, dealing with an AI Revolution will require new ways of distributing wealth. In the long run this will be good even for the rich, but in the short term it’s a pretty scary prospect for those with money—and one they’ll fight zealously. Until they have no choice left, conservatives are simply not going to admit this is happening, let alone think about how to address it. It’s not in their DNA.
Other candidates are equally unlikely. The military thinks about automation all the time—but primarily as a means of killing people more efficiently, not as an economic threat. The business community is a slave to quarterly earnings and in any case will be too divided to be of much help. Labor unions have good reason to care, but by themselves they’re too weak nowadays to have the necessary clout with policymakers.
Nor are we likely to get much help from governments, which mostly don’t even understand what’s happening. Google’s Schmidt puts it bluntly. “The gap between the government, in terms of their understanding of software, let alone AI, is so large that it’s almost hopeless,” he said at a conference earlier this year. Certainly that’s true of the Trump administration. Asked about AI being a threat to jobs, Treasury Secretary Steven Mnuchin stunningly waved it off as a problem that’s still 50 or 100 years in the future. “I think we’re, like, so far away from that,” he said. “Not even on my radar screen.” This drew a sharp rebuke from former Treasury Secretary Larry Summers: “I do not understand how anyone could reach the conclusion that all the action with technology is half a century away,” he said. “Artificial intelligence is transforming everything from retailing to banking to the provision of medical care.”
So who’s left? Like it or not, the only real choice to sound the alarm outside the geek community is the Democratic Party, along with its associated constellation of labor unions, think tanks, and activists. Imperfect as it is—and its reliance on rich donors makes it conspicuously imperfect—it’s the only national organization that has both the principles and the size to do the job.
Unfortunately, political parties are inherently short-term thinkers. Democrats today are absorbed with fighting President Donald Trump, saving Obamacare, pushing for a $15 minimum wage—and arguing about all those things. They have no time to think hard about the end of work.
Nonetheless, somebody on the left with numbers, clout, power, and organizing energy—hopefully all the above—had better start. Conventional wisdom says Trump’s victory last year was tipped over the edge by a backlash among working-class voters in the Upper Midwest. When blue-collar workers start losing their jobs in large numbers, we’ll see a backlash that makes 2016 look like a gentle breeze. Either liberals start working on answers now, or we risk voters rallying around far more effective and dangerous demagogues than Trump.
Despite the amount of media attention that both robots and AI have gotten over the past few years, it’s difficult to get people to take them seriously. But start to pay attention and you see the signs: An Uber car can drive itself. A computer can write simple sports stories. SoftBank’s Pepper robot already works in more than 140 cellphone stores in Japan and is starting to get tryouts in America too. Alexa can order replacement Pop-Tarts before you know you need them. A Carnegie Mellon computer that seems to have figured out human bluffing beat four different online-poker pros earlier this year. California, suffering from a lack of Mexican workers, is ground zero for the development of robotic crop pickers. Sony is promising a robot that will form an emotional bond with its owner.
These are all harbingers, the way a dropping barometer signals a coming storm—not the possibility of a storm, but the inexorable reality. The two most important problems facing the human race right now are the need for widespread deployment of renewable energy and figuring out how to deal with the end of work. Everything else pales in comparison. Renewable energy already gets plenty of attention, even if half the country still denies that we really need it. It’s time for the end of work to start getting the same attention.


Rise of robotics will upend laws and lead to human job quotas, study says

Innovation in artificial intelligence and robotics could force governments to legislate for quotas of human workers, upend traditional working practices and pose novel dilemmas for insuring driverless cars, according to a report by the International Bar Association.
The survey, which suggests that a third of graduate level jobs around the world may eventually be replaced by machines or software, warns that legal frameworks regulating employment and safety are becoming rapidly outdated.
The competitive advantage of poorer, emerging economies – based on cheaper workforces – will soon be eroded as robot production lines and intelligent computer systems undercut the cost of human endeavour, the study suggests.
While a German car worker costs more than €40 (£34) an hour, a robot costs between only €5 and €8 per hour. “A production robot is thus cheaper than a worker in China,” the report notes. Nor does a robot “become ill, have children or go on strike and [it] is not entitled to annual leave”.
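As a rough illustration of the arithmetic behind that comparison, here is a minimal sketch using the report’s hourly rates. The annual working hours are an illustrative assumption of ours, not a figure from the report:

```python
# Rough annual cost comparison using the hourly rates cited above.
# 2,000 hours/year is an illustrative assumption, not from the report.
HOURS_PER_YEAR = 2000

human_cost = 40 * HOURS_PER_YEAR       # German car worker at ~€40/hour
robot_cost_low = 5 * HOURS_PER_YEAR    # robot at the low end, €5/hour
robot_cost_high = 8 * HOURS_PER_YEAR   # robot at the high end, €8/hour

print(f"Human worker: €{human_cost:,}/year")
print(f"Robot:        €{robot_cost_low:,}–€{robot_cost_high:,}/year")
```

Even at the high end, the robot costs a fifth of the human worker on this crude accounting, and a robot can run extra shifts without overtime, widening the gap further.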
The 120-page report, which focuses on the legal implications of rapid technological change, has been produced by a specialist team of employment lawyers from the International Bar Association, which acts as a global forum for the legal profession.
The report covers both changes already transforming work and the future consequences of what it terms ‘industrial revolution 4.0’. The three preceding revolutions are listed as: industrialisation, electrification and digitalisation. ‘Industry 4.0’ involves the integration of physical systems and software in production and the service sector. Amazon, Uber, Facebook, ‘smart factories’ and 3D printing, it says, are among current pioneers.
The report’s lead author, Gerlind Wisskirchen – an employment lawyer in Cologne who is vice-chair of the IBA’s global employment institute – said: “What is new about the present revolution is the alacrity with which change is occurring, and the broadness of impact being brought about by AI and robotics.
“Jobs at all levels in society presently undertaken by humans are at risk of being reassigned to robots or AI, and the legislation once in place to protect the rights of human workers may be no longer fit for purpose, in some cases ... New labour and employment legislation is urgently needed to keep pace with increased automation.”
Peering into the future, the authors suggest that governments will have to decide what jobs should be performed exclusively by humans – for example, caring for babies. “The state could introduce a kind of ‘human quota’ in any sector,” and decide “whether it intends to introduce a ‘made by humans’ label or tax the use of machines,” the report says.
Increased mechanical autonomy will cause problems of how to define legal responsibility for accidents involving new technology such as driverless cars. Will it be the owner, the passengers, or manufacturers who pay the insurance?
“The liability issues may become an insurmountable obstacle to the introduction of fully automated driving,” the study warns. Driverless forklifts are already being used in factories. Over the past 30 years there have been 33 employee deaths caused by robots in the US, it notes.
Limits, it says, will have to be imposed on some aspects of machine autonomy. The study adopts the military principle, endorsed by the Ministry of Defence, that there must always be a ‘human in the loop’ to prevent the development and deployment of entirely autonomous drones that could be programmed to select their own targets.
“A no-go area in the science of AI is research into intelligent weapon systems that open fire without a human decision having been made,” the report states. “The consequences of malfunctions of such machines are immense, so it is all the more desirable that not only the US, but also the United Nations discusses a ban on autonomous weapon systems.”
The term ‘artificial intelligence’ (AI) was first coined by the American computer scientist John McCarthy in 1955. He believed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Software developers are still attempting to achieve his goal.
The gap between economic reality in the self-employed ‘gig economy’ and existing legal frameworks is already growing, the lawyers note. The new information economy is likely to result in more monopolies and a greater income gap between rich and poor because “many people will end up unemployed, whereas highly qualified, creative and ambitious professionals will increase their wealth.”
Among the professions deemed most likely to disappear are accountants, court clerks and ‘desk officers at fiscal authorities’.
Even some lawyers risk becoming unemployed. “An intelligent algorithm went through the European Court of Human Rights’ decisions and found patterns in the text,” the report records. “Having learned from these cases, the algorithm was able to predict the outcome of other cases with 79% accuracy ... According to a study conducted by [the auditing firm] Deloitte, 100,000 jobs in the English legal sector will be automated in the next 20 years.”
South Korea leads the world in industrial robot density, with 437 robots for every 10,000 employees in the processing industry; Japan has 323 and Germany 282.
Robots may soon invade our home and leisure environments. In the ‘Henn-na Hotel’ in Sasebo, Japan, ‘actroids’ – robots with a human likeness – are deployed, the report says. “In addition to receiving and serving the guests, they are responsible for cleaning the rooms, carrying the luggage and, since 2016, preparing the food.”
The robots are able to respond to the needs of the guests in three languages. The hotel plans to replace up to 90% of its employees with robots, leaving a few human staff to monitor CCTV cameras and intervene if problems arise.
The traditional workplace is disintegrating, with more part-time employees, distance working, and the blurring of professional and private time, the report observes. It is being replaced by “the ‘latte macchiato’ workplace”, where employees or freelance workers sit in the cafe around the corner, working from their laptops.
The workplace may eventually serve only the purpose of maintaining social networks between colleagues.


Military Robots: Armed, but How Dangerous?

An open letter calling for a ban on lethal weapons controlled by artificially intelligent machines was signed last week by thousands of scientists and technologists, reflecting growing concern that swift progress in artificial intelligence could be harnessed to make killing machines more efficient, and less accountable, both on the battlefield and off. But experts are more divided on the issue of robot killing machines than you might expect.
The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by many leading AI researchers as well as prominent scientists and entrepreneurs including Elon Musk, Stephen Hawking, and Steve Wozniak. The letter states:
“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”
Rapid advances have indeed been made in artificial intelligence in recent years, especially within the field of machine learning, which involves teaching computers to recognize often complex or subtle patterns in large quantities of data. And this is leading to ethical questions about real-world applications of the technology (see “How to Make Self-Driving Cars Make Ethical Decisions”).
Meanwhile, military technology has advanced to allow actions to be taken remotely, for example using drone aircraft or bomb disposal robots, raising the prospect that those actions could be automated.
The issue of automating lethal weapons has been a concern for scientists as well as military and policy experts for some time. In 2012, the U.S. Department of Defense issued a directive banning the development and use of “autonomous and semi-autonomous” weapons for 10 years. Earlier this year the United Nations held a meeting to discuss the issue of lethal automated weapons, and the possibility of such a ban.
But while military drones or robots could well become more automated, some say the idea of fully independent machines capable of carrying out lethal missions without human assistance is more fanciful. With many fundamental challenges still remaining in the field of artificial intelligence, it’s far from clear when the technology needed for fully autonomous weapons might actually arrive.
“We’re pushing new frontiers in artificial intelligence,” says Patrick Lin, a professor of philosophy at California Polytechnic State University. “And a lot of people are rightly skeptical that it would ever advance to the point where it has anything called full autonomy. No one is really an expert on predicting the future.”
Lin, who gave evidence at the recent U.N. meeting, adds that the letter does not touch on the complex ethical debate behind the use of automation in weapons systems. “The letter is useful in raising awareness,” he says, “but it isn’t so much calling for debate; it’s trying to end the debate, saying ‘We’ve figured it out and you all need to go along.’”
Stuart Russell, a leading AI researcher and a professor at the University of California, Berkeley, dismisses this idea. “It’s simply not true that there has been no debate,” he says. “But it is true that the AI and robotics communities have been mostly blissfully ignorant of this issue, maybe because their professional societies have ignored it.”
One issue of debate, which the letter does acknowledge, is that automated weapons could conceivably help reduce unwanted casualties in some situations, since they would be less prone to error, fatigue, or emotion than human combatants.
Those behind the letter have little time for this argument, however.
Max Tegmark, an MIT physicist and founder member of the Future of Life Institute, which coordinated the letter signing, says the idea of ethical automated weapons is a red herring. “I think it’s rather irrelevant, frankly,” he says. “It’s missing the big point about what is this going to lead to if one starts this AI arms race. If you make the assumption that only the U.S. is going to build these weapons, and the number of conflicts will stay exactly the same, then it would be relevant.”
The Future of Life Institute has issued a more general warning about the long-term risks posed by unfettered AI, cautioning that it could pose serious dangers in the future.
“This is quite a different issue,” Russell says. “Although there is a connection, in that if one is worried about losing control over AI systems as they become smarter, maybe it’s not a good idea to turn over our defense systems to them.”
While many AI experts seem to share this broad concern, some see it as a little misplaced. For example, Gary Marcus, a cognitive scientist and artificial intelligence researcher at New York University, has argued that computers do not need to become artificially intelligent in order to pose many other serious risks, to financial markets or air-traffic systems, for example.

Lin says that while the concept of unchecked killer robots is obviously worrying, the issue of automated weapons deserves a more nuanced discussion. “Emotionally, it’s a pretty straightforward case,” says Lin. “Intellectually I think they need to do more work.”