Sunday, May 15, 2016
Here's How Xbox One Can Still Pull Ahead of Sony
After years of only playing on the Xbox 360 and PlayStation 3, gamers finally saw the next generation of gaming arrive in 2013, when both Microsoft and Sony released their new consoles.
Immediately, the war for console gamers heated up, but after E3 that year, Sony clearly came out on top. Microsoft took a lot of heat for some of its controversial leaked policies regarding the Xbox One, including an always-on digital rights management system and a requirement that players use Kinect. Microsoft eventually renounced those policies, but when the consoles went on sale later that year, players hadn't forgotten those initial missteps.
Sales over the consoles' first two years made the PlayStation 4 the clear winner of the next-generation competition. Although Microsoft has done its best to placate gamers, the Xbox One still struggles to keep up with the PlayStation 4.
With Black Friday right around the corner and special deals likely for both consoles, what can Microsoft do to turn the tide? Here's how the Xbox One can become more competitive with the PlayStation 4.
Backward compatibility
This month saw the arrival of backward compatibility on the Xbox One, which did a lot to drive sales for the console (October 2015 even saw Xbox One sales beat PlayStation 4 sales). Backward compatibility means that players can still play many of their Xbox 360 games on their Xbox One, motivating many to finally upgrade to the next-gen console. More than 100 Xbox 360 games will become playable on the Xbox One, with Microsoft promising more titles to come.
“Xbox One Backward Compatibility will be available at no additional charge, so you won’t have to pay to play games you already own,” wrote Microsoft on the Xbox website. “Not only is backward compatibility one of the most-requested fan features, it’s also part of our vision to provide the biggest and best games catalog without limits, so you can get more out of the investment you’ve made in your game collection, and your games library is not limited to just one device.”
Of all the moves that Microsoft could make to become more competitive with Sony’s console, backward compatibility is probably one of the most important. Sony has PlayStation Now, which charges players for the service of playing older PlayStation titles, but with Microsoft offering something similar for free, Sony can’t compete.
Price
Another way the Xbox One can move more units is by lowering the price. Adopting next-generation consoles early on isn't exactly cheap (the PlayStation 4 launched at $399 and the Xbox One at $499), but now that these consoles have been out for two years, the price wars will likely begin this year. Microsoft already started offering a Kinect-free version of the Xbox One for $50 less, making the console less expensive than the PlayStation 4.
However, to win the next-gen console race, Microsoft needs to do even better, perhaps by knocking off $100. It’s likely that Black Friday will see such a price drop, perhaps for both systems, but Microsoft could do even better if it kept the price at $299 or less year-round.
Although gamers generally have a little money to throw around, those hesitant to upgrade might become more willing if the console costs less than a PC.
Games
Perhaps where the Xbox One fails most is in its exclusive games: Microsoft seems focused on capturing only players interested in third-person shooters and Call of Duty clones, completely missing the market of players who like games with a little more story and content.
Innovative games are where Sony excels; this year saw the release of Until Dawn, a story-driven game where every choice has a consequence and affects the story. These aren't the kind of games players see much of on the Xbox One, and Microsoft needs to start partnering with game developers who can see beyond the box of standard shooters. Sure, the Xbox One gets Rise of the Tomb Raider as an exclusive, but only for a limited time; the game will eventually be released on the PS4, too. That's probably not enough to drive console sales.
Sony has also always had a good relationship with smaller game developers and embraces the indie game market. Xbox's relationship with indie game devs is less than stellar; the company once required that every Xbox One game have a publisher, something many indie games do not. Microsoft has since reversed course and now allows self-published games.
But is this a case of too little, too late? Small developers already have a good working relationship with Sony, and most prefer to spend their valuable time and money working with just one platform. Microsoft really needs to start holding its hand out to these developers to entice them to embrace the Xbox One.
Regardless, Sony offers a wider variety of games across genres, appealing to more gamers. Microsoft needs to embrace that variety if it wants to catch up.
Virtual reality
If this year's E3 was any example, virtual reality will soon become not just the future of gaming, but the present of the industry. Although developers have long toyed with the idea of VR, 2015 was the year those ideas became a reality. Sony released the prototype for its PlayStation VR system (formerly called "Project Morpheus") and had more games to demo than any other company at E3 2015. The headset also works with any standard PS4 console, which means it could come in at a lower price than other systems, too.
However, Microsoft could quickly catch up to Sony in VR because the company has a partnership with Oculus Rift, a system that's been in development for years. Microsoft also announced a partnership with Valve for that company's VR system. Neither of these is proprietary to Microsoft, though, which means users would have to pony up more money to play VR on the Xbox One than on the PS4.
Microsoft needs to compete in the VR arena, though, to keep up with the PlayStation 4. The company should eventually invest in its own VR system, because working with third-party systems makes the process too complicated for gamers. Microsoft also needs more games to show off with VR, but partnerships with two separate systems could help it achieve that goal.
Facebook’s suppression of conservative points of view
Yesterday, my friend Steven Crowder promised that he would not stand idly by after the Gizmodo exposé of Facebook's suppression of conservative points of view in its trending topics. Today, Steven announced his next move — a legal demand from Facebook for its records relating to curation and editorial actions. Steven says that the action was "a long time coming," but that Gizmodo's report made it all the more urgent to act now:
The Gizmodo.com story coincides with, and now potentially provides an explanation for, Facebook's mismanagement of payments made to Facebook by Mr. Crowder and its woefully biased and unprofessional treatment of his accounts during an ongoing billing dispute. Simultaneously, Facebook has chosen to avoid any transparency in the ongoing removal of certain political posts by Mr. Crowder, ignoring all requests for explanation of purported policy violations. These issues have been ignored by Facebook and its Legal Department despite repeated attempts to resolve the issue on his behalf. Facebook's ongoing refusal to take action regarding their clear-cut, inexcusable financial errors has necessitated that preliminary legal steps be taken.
It is fully understood that Facebook has every right to curate any content they so desire on their platform. However, Facebook's bullying methods of operation in tandem with both the long-standing evidence of misconduct and the allegations newly brought to light require further investigation given the direct financial ramifications on business clients acting in trust with Facebook.
It's unclear to what extent Facebook will feel compelled to comply with a pre-lawsuit demand for discovery, but at least it gets the dispute with those alleged to be directly harmed into the judicial arena. That's one arena, but it's not the only one. In my column today for The Week, I argue that conservatives should fight in all arenas — even on Facebook — rather than cooperate with the attempts at marginalization:
In the wake of these allegations, discussion among conservatives on social media turned to questions of why conservative sites bother with Facebook at all. Should conservatives just dump Facebook?
No. This would be a terrible mistake. Facebook is enormous. Nearly three in five American adults have a Facebook account. Failing to be part of Facebook would only make conservatism more insular than it already is.
Conservatives have been here before, let's not forget. We've been dealing with editorial bias for decades, and have only been able to break free from the gatekeepers by engaging and exposing them. Bernard Goldberg wrote about this in his seminal book Bias, showing how both deliberate and unconscious bias shaped the news that traditional media presented us, both in presentation and by omission. Facebook's actions are particularly dishonest, though, as it never disclosed that curation and editorial intervention existed in its trending-topics index.
But an even more compelling reason to engage is this: Pulling out is exactly what liberal Facebook “curators” want. They wanted to banish conservatives from the platform, or failing that, to make them as irrelevant as possible. Why cooperate with that? Conservatives should use the open platform of Facebook and other social-media networks to engage people, make connections, and use those networks to expand the reach and relevance of the conservative agenda.
Some of this is on consumers, though:
Never before have consumers had this much access and choice in news sources — and with it the ability to defeat the editorial gatekeepers and gain a balanced and informed perspective. Relying only on Facebook is akin to reading only the hometown newspaper and believing it contains all the news that's fit to print. Instead of trusting a social media network to make those choices, consumers should exercise their own choices — and call out those gatekeepers when their biases become so obvious as to be insulting. And the only way to expose that is to remain engaged — and maybe flood the zone so that those biases become even more apparent.
Update: Congressional oversight is definitely not the way to proceed, however:
"If Facebook presents its Trending Topics section as the result of a neutral, objective algorithm, but it is in fact subjective and filtered to support or suppress particular political viewpoints, Facebook's assertion that it maintains a 'platform for people and perspectives from across the political spectrum' misleads the public," Senate Commerce Committee chairman John Thune, R-S.D., wrote in a Tuesday letter to Facebook CEO Mark Zuckerberg. …
To put this politely, that's nonsense. Facebook is not the Internet, but is content on the Internet. It falls under precisely zero points of federal regulation; it's a widely used but voluntary association, answerable to no federal agencies. Ergo, Congress has no business butting in. Conservatives have plenty of market power on their own to challenge this abuse, and should have no appetite for the camel's nose of speech regulation to enter into this tent.
“Facebook must answer these serious allegations and hold those responsible to account if there has been political bias in the dissemination of trending news,” Thune said in a statement accompanying the letter. “Any attempt by a neutral and inclusive social media platform to censor or manipulate political discussion is an abuse of trust and inconsistent with the values of an open Internet.”
Thursday, May 12, 2016
Decrease in household incomes in most U.S. metropolitan areas
The American middle class is losing ground in metropolitan areas across the country, affecting communities from Boston to Seattle and from Dallas to Milwaukee. From 2000 to 2014, the share of adults living in middle-income households fell in 203 of the 229 U.S. metropolitan areas examined in a new Pew Research Center analysis of government data. The decrease in the middle-class share was often substantial, measuring 6 percentage points or more in 53 metropolitan areas, compared with a 4-point drop nationally.
The shrinking of the middle class at the national level, to the point where it may no longer be the economic majority in the U.S., was documented in an earlier analysis by the Pew Research Center. The changes at the metropolitan level, the subject of this in-depth look at the American middle class, demonstrate that the national trend is the result of widespread declines in localities all around the country.
The widespread erosion of the middle class took place against the backdrop of a decrease in household incomes in most U.S. metropolitan areas. Nationwide, the median income of U.S. households in 2014 stood at 8% less than in 1999, a reminder that the economy has yet to fully recover from the effects of the Great Recession of 2007-09. The decline was pervasive, with median incomes falling in 190 of the 229 metropolitan areas examined. Goldsboro ranked near the bottom with a loss of 26% in median income. Midland bucked the prevailing trend, with the median income there rising 37% from 1999 to 2014, the greatest increase among the areas examined.
In other words, a household may have moved from the middle- to the upper-income tier simply by staying put, or by increasing its income only a modest amount: the tier boundaries are pegged to the median, and the median itself fell. By the same logic, households that dropped from the middle into the lower-income tier lost even more ground than these median comparisons suggest, since they fell behind a benchmark that was itself sinking.
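To make the tier mechanics concrete, here is a minimal sketch of a Pew-style classification, assuming Pew's published definition of "middle income" as roughly two-thirds to double the national median household income (Pew also adjusts for household size, a step omitted here). The income figures are hypothetical, chosen only to show how a falling median can move a household with an unchanged paycheck into a higher tier.

```python
# Minimal sketch of a Pew-style income-tier classification.
# Assumes the published Pew definition: "middle income" spans
# two-thirds to double the national median household income.
# (Pew also adjusts for household size; that step is omitted here.)

def classify(income: float, median: float) -> str:
    """Assign a household to an income tier relative to the national median."""
    if income < (2 / 3) * median:
        return "lower"
    elif income <= 2 * median:
        return "middle"
    return "upper"

# Hypothetical numbers for illustration only. With the median at
# $60,000, the middle tier tops out at $120,000, so a household
# earning $112,000 is middle income. If the median slips 8% to
# $55,200 (mirroring the national decline cited above), the upper
# cutoff falls to $110,400 and that same household, earning not a
# dollar more, is now "upper income."
for median in (60_000, 55_200):
    print(median, classify(112_000, median))
# -> 60000 middle
# -> 55200 upper
```

This relative definition is why the report can show households moving up a tier even in metros where incomes broadly fell.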
Some of these issues relate to local economies, some to the national stagnation that has followed the Great Recession. But this study makes a good data point to explain the political environment of this cycle and the anger that has fueled it. Perhaps nothing explains it better than the fact that the greater DC area ranks third among metropolitan areas in its share of upper-income households (32%), behind Midland, Texas and the Bridgeport, Connecticut region. The rich get richer … and the powerful, too.
I think it is not helpful that the country is being flooded by low-income illegals. It's undercutting wages not just of entry level positions, but of skilled contractor / construction type jobs across the board.
According to the Pew Hispanic Center, less than 4% of illegal aliens in the U.S. are employed in agriculture of all types. The vast majority work in substantial jobs such as CONSTRUCTION, ROOFING, landscaping, and EVEN MASONRY and plumbing, where their low wages have driven many millions of Americans out of work.
I blame the Federal Reserve and the Keynesian philosophy that drives it. The middle class is wealthy enough to have some savings, but generally not wealthy enough to be heavily invested in anything but their own families. This means that Keynes-inspired inflation steals from the middle class and gives that money to the investor class. Quantitative Easing did this even more directly, by deliberately inflating stock market prices (helping investors) to stop deflation (which would have helped those with savings).
Of course minimum wage laws also contribute, by dividing the country into those with high-paying jobs, and those who can't find ANY job.
So do you mean it would be better if more low-income jobs were done by Americans? Or do you mean it would be better if Americans had to pay more to have those jobs done?
Tiffany Zhang: It's true that if you flood the country with low-wage illegals, then consumers of services, such as construction services, will pay less.
But the costs are HUGE. The erosion of the incomes of the lower and middle classes means greater inequality and greater entitlement and welfare costs, so ultimately much of the 'savings' of higher income consumers is lost to taxes. Further, in the US we are seeing the thirdworldification and bi-lingualification of the country. That's not necessarily a welcome addition.
Worst of all the demographic change tilting the country heavily toward the low income immigrants and their low income offspring will ... TURN THE COUNTRY SOCIALIST. So those 'savings' in service costs will ultimately be paid for in higher taxes for socialist redistribution and entitlements, and probably in direct socialist property confiscations.
Tuesday, May 10, 2016
Why did the Wii U fail?
1. The Weird Controller Stew
The Wii U's controller setup is anything but simple. There's the gamepad and the Wii remote, with some games requiring both, especially in multiplayer. There's the Pro Controller. There's a GameCube-inspired controller. In multiplayer, only one player can have the gamepad, which can lead to fights over it or general confusion as it is passed from hand to hand. The Wii sold itself through simplicity; the Wii U is weirdly complex. One of the most common questions among those new to the console is simply, "what controllers do I need?" (A question I answer here.)
2. The Gamepad Perplexes Even Nintendo
When you introduce a new, innovative technology, it's a good idea to have some idea of what to do with it, but it soon became apparent that Nintendo had few ideas for the gamepad. They used it in a few party games, but increasingly ignored it for everything but off-TV play. After a couple of disastrous years and suggestions that the Wii U should simply be re-released without the pricey controller, Nintendo set Shigeru Miyamoto on the task of creating games that would prove the beauty of the controller. Of the three he showed off (which I previewed here), only Star Fox Zero has a release date - actually two release dates, the one they missed and the one they will hopefully make.
3. Third Party Support is Virtually Nonexistent
There's a big difference between getting third party publishers to announce a handful of games for a console before launch and getting real support for it. After failing with a few ports of aging games, then noticing the Wii U's weak sales, most publishers lost all interest in the console.
Third party publishers would love to have successful games on a Nintendo system, but for the most part non-Nintendo games just don't do well, and if there's anything that Nintendo could do to change that, they certainly haven't done it.
4. It's Underpowered
Releasing a console about as powerful as the Xbox 360 and the PS3 a year before Sony and Microsoft launched much more powerful consoles seemed like a bad idea when it happened, and the decision hasn't aged well. Not only was the result less intrinsically exciting for hi-def graphics fans, but it created difficulties in adapting XB1/PS4 games to the Wii U, exacerbating its third-party issues.
5. The Controller is Less Impressive Than an iPhone
If you're going to do something, do it right. Sure, the Wii U gamepad's touch screen is a clever idea, but it also feels like it's already behind the technological curve. While an iPhone is multi-touch, allowing you to do things like expand a photo by pulling it like taffy, the Wii U's controller is single-touch, like the DS before it. And while the inner-facing camera might allow games to do cute things like put you on screen, an outer-facing camera that could let the controller perfectly align itself with the TV would seem far more useful.
6. There's No Internal Hard Drive
Storage space is yet another of Nintendo's many blind spots. When they created the Wii they didn't even consider the issues of downloading games, and even balked when gamers demanded a solution. This time around, they are still relying on flash memory, although at least internal memory has gone from the half GB in the Wii to a choice of 8 or 32 GB. You can, at least, attach a USB drive, although about the last thing in my life I need is yet another device I have to plug into my power strip.
7. It's Too Expensive for What It Is
Nintendo had a price advantage at first over the PS4 and XB1, but once you bought an external hard drive to make up for the lack of internal storage, the prices evened out, and once the XB1 dropped Kinect, you could get it for the same price as a Wii U. Nintendo offered a less powerful console to bring down a price inflated by the cost of the gamepad, but ultimately failed to get a real price advantage.
8. It Ditched Casual Gamers
The Wii was a brilliant idea: a controller so easy and intuitive that it drew a slew of non-gamers into the world of video games. After having sold consoles to millions of non-gamers, Nintendo washed its hands of these converts, putting out a controller with the very collection of triggers and buttons that kept casual gamers from playing video games pre-Wii. Even though the Wii U still supports the Wii's remote and nunchuk, they are generally ignored (even when remaking the Wii game The Legend of Zelda: Twilight Princess, Nintendo ditched the Wii Remote), meaning there's no reason for casual gamers to even consider upgrading to the new system. This left Nintendo in a fight with Sony and Microsoft for the very core gamers who consider the Wii U beneath notice.
9. Yet It Never Committed to Core Gamers
Nintendo claimed that with the Wii U they were making something for the core gamers they had ignored throughout the Wii's history. The Wii U would not just be a console for tots and grandmas; this time around there would be far more games that could compete with the adult fare found on Sony and Microsoft consoles.
But there weren't. Sure, they saved Bayonetta 2 and made it a Wii U exclusive, and two years later they did the same (less successfully) for Devil's Third, but a single core title every couple of years is hardly worth mentioning. Nintendo likes to develop family-friendly games, and while some series, like Legend of Zelda, Pikmin and Metroid Prime, are loved by core gamers, Nintendo's own output will always skew towards families. With no support from third parties, the Wii U remains the province of tots and grandmas.
10. It Offers Fewer Extras Than the Competition
While Sony and Microsoft wanted their machines to be both gaming machines and media centers, Nintendo still believes that a game console should just be a game console. It should not play DVDs or Blu-rays, or double as an MP3 player. Well, they're wrong. Increasingly, gamers aren't buying those devices separately; they just use the versions built into their consoles. If someone wants a game console and a Blu-ray player, are they really going to buy one of each when they can just get a PS3 or PS4? As in so many cases, Nintendo is living in the past, ignoring what we who live in the present have come to expect from our machines: everything.
True, you can watch Netflix and Hulu, but you can do so much more with the competition's machines.
Sunday, May 08, 2016
Gears of War 4 would have cost $100 million on Xbox One and could have bankrupted Epic Games
Gears of War 4 is being developed by The Coalition for Xbox One.
Back in January 2014, Microsoft announced that it purchased the Gears of War property from Epic Games. Later this fall, Microsoft will be shipping Gears of War 4, a game that would have cost Epic around $100 million to develop had it held onto the IP, estimates Epic boss Tim Sweeney. For Epic, unloading the franchise was a transitional point for the company and a necessary business move.
"The very first Gears of War game cost $12 million to develop, and it made about $100 million in revenue," Sweeney told Polygon in a feature on Epic's evolution. "It was very profitable... By the end of the cycle, Gears of War 3 cost about four or five times more than the original to make. The profit was shrinking and shrinking. We calculated that, if we built Gears of War 4, the budget would have been well over $100 million, and if it was a huge success, we could break even. Anything less could put us out of business. That's what caused us to move and change business models."
While economics played a large part in Epic's decision, the developer also clearly felt constrained by its publishing relationship with Microsoft. If a publisher bankrolls a project, it can veto certain things in a game, and that didn't sit well with Sweeney when Epic was working on Gears of War: Judgment.
"When we released Gears of War: Judgment, a bunch of community players were complaining about all the multiplayer levels we created. We realized that, you know, there are some problems with this, we should rework it, create a bunch of new content and release multiplayer around a new game just like we did in the project that was the genesis of Unreal Tournament," Sweeney said. "We had all these plans to do this, and so we went to Microsoft and we said, 'Hey, we want to do this.' And they said 'No, you don't want to do that.'
"We weren't asking them for money, but you know as our publisher and proprietor of Xbox, it didn't fit into their business plans and so they said no. That made me realize very clearly the risk of having a publisher or anybody standing between game developers and gamers - and how toxic and destructive that process could be to the health of a game and its community."
With AAA games becoming increasingly expensive and risky to make, and business models changing in favor of things like Early Access and games-as-a-service titles, Epic realized the "writing was on the wall," Sweeney said.
The company pivoted and sold a 40 percent stake to Chinese giant Tencent, and began looking at free-to-play efforts like the indie survival game Fortnite and the MOBA title Paragon.
"We realized that the business really needed to change its approach quite significantly. We were seeing some of the best games in the industry being built and operated as live games over time rather than big retail releases. We recognized that the ideal role for Epic in the industry is to drive that, and so we began the transition of being a fairly narrow console developer focused on Xbox to being a multi-platform game developer and self publisher, and indie on a larger scale," Sweeney described.
Friday, May 06, 2016
Nintendo NX won't be sold at a loss.
Nintendo president Tatsumi Kimishima has said the company isn't planning on selling the Nintendo NX at a loss. In a post-earnings Q&A with investors last week, the executive said the company isn't considering pricing its next system at anything less than break-even.
While other console makers have frequently sold hardware at a loss with the intention of making up that money through software sales, Nintendo has almost never used that strategy. One notable exception was the Wii U, which launched at a loss.
"When Wii U was launched, the yen was very strong," Kimishima said. "I am assuming that situation will not repeat itself. Selling at a loss at launch would not support the business, so we are keeping that mind in developing NX."
Kimishima also commented on lowered expectations surrounding the Wii U, and how quickly the NX business will make up for the legacy console's flagging performance.
"We are predicting about 800,000 Wii U hardware sales in the fiscal year ending March 2017, which is a decrease of about 2.4 million units compared to the previous year," Kimishima said. "NX and smart device business will be essential to cover this gap, but we also expect download content business to play a role. However, we are planning with the expectation that NX sales will compensate for much of the impact on sales from reduced Wii U hardware sales."
Wednesday, May 04, 2016
PlayStation 4.5 specs
It now seems absolutely clear that Sony is working behind the scenes on a new video games console codenamed PlayStation Neo. There is still some debate over whether this is a working title to be abandoned in favor of PlayStation 4.5 or PlayStation 4K at release, or whether PlayStation Neo could be the ultimate branding of the machine.
PlayStation 4.5 coming
Regardless of this, the Japanese corporation seemingly intends to release an improved version of the existing PlayStation 4 by the end of the calendar year. Upgraded components will allow an improved gaming experience, and analysts generally believe that Sony will release this PlayStation 4.5 console in October, naturally targeting the Christmas marketplace.
After initial murmurings on the subject, more solid information has now emerged which strongly supports the forthcoming release of the PlayStation 4.5, even if Sony itself has yet to confirm the existence of the device. While Sony will be handling the release of this machine assiduously, it must nonetheless be rather pleased that the industry buzz at the moment is all related to its forthcoming console.
It is still unclear precisely what Sony intends to deliver with the PlayStation 4.5, with early speculation focusing on the machine's ability to deliver virtual reality and 4K-resolution gaming. But an early leak of rumored specs does indicate that Sony's new machine may struggle to deliver native 4K gaming, although it does at least have the potential to provide this resolution via upscaling.
Similarly, there is no indication of a virtual reality peripheral specific to the PlayStation 4.5, distinct from the one delivered for the PlayStation 4. Leaks from close to Sony have already indicated that the corporation will forbid developers from releasing peripherals solely intended for the PlayStation 4.5. This scuppers the notion of a PlayStation 4.5-only PlayStation VR unit, although it is possible that the corporation could deliver an improved experience with the existing PlayStation virtual reality hardware.
Neo codename
What we do know, or at least strongly suspect, after a recent report from the GameSpot sister site Giant Bomb, is that the PlayStation 4.5 is under development, and that it has indeed been codenamed Neo. Sources speaking to the prestigious gaming website have suggested that the PlayStation 4.5 will be based around an upgraded architecture, with improved specifications for CPU, GPU and RAM.
While the gulf between these and the PlayStation 4 is perhaps not as large as anticipated, the fact remains that the PlayStation 4.5 will be significantly more powerful than the existing console. Already there is talk of the PlayStation 4.5 delivering titles with increased frame rates over the PlayStation 4, and it does seem likely that the console will be able to deliver games in superior resolution to the 1080p standard of the PS4.
Whether developers will quite be able to push this machine to native 4K resolution is debatable, but Sony has already reportedly devised advice for game manufacturers indicating how to get the best out of the new machine and indeed deliver 4K resolution.
PlayStation family
Sony has a bit of a delicate balancing act to carry out with the PlayStation 4.5: the Japanese corporation needs to ensure that it does not alienate its existing PlayStation 4 audience, while at the same time delivering an upgraded experience that makes users want to move to the new console.
With this in mind, early reports indicate that the Japanese corporation has indeed given a significant amount of thought to creating what is essentially a standardized platform, ensuring that PlayStation 4 owners will be neither excluded nor alienated from the PlayStation family. Thus, all new titles will be required to run on both the PlayStation 4 and PlayStation 4.5, and it seems clear that online gamers on both consoles will be able to compete with one another.
E3 unveiling
With the E3 trade show ahead in June, eager fans will be hoping that Sony presents gamers with the first glimpse of the PlayStation 4.5 console there. There will certainly be a huge amount of buzz ahead of the event in Los Angeles, and it would be something of a knockout blow if Sony were indeed able to unveil the PlayStation 4.5 at this date.
The Wall Street Journal has already reported that the console will be announced ahead of the release of the PlayStation VR, with the two machines essentially intended to work with one another. It is also believed by analysts that Sony will reveal a new slimline appearance for the PlayStation series when the PlayStation 4.5 is released, with the casing of the device being somewhat smaller and lighter than the existing console.
Chip innovation
Although the capabilities of the PlayStation 4.5 are still very much up in the air, Eurogamer has suggested that the Sony console will take advantage of new innovations and improvements in chip architecture. This will enable the central processing unit to perform more efficiently, perhaps allowing developers to squeeze more juice out of the system than is the case with the PlayStation 4.
Eurogamer particularly links the Tahiti processor found in AMD's Radeon HD 7970 with the PlayStation 4.5, suggesting that this unit could deliver a 60 percent increase in computing power. Overclocking, faster RAM and possibly more memory are also suggested by the publication, although not all of these mused features are included in the report by Giant Bomb.
Pricing
Undoubtedly the price of the unit will be a major consideration for Sony, as it attempts to keep costs down while also releasing a powerful machine at an attractive price point. That is quite a bit for Sony to balance, but the general belief is that the PlayStation 4.5 may retail at around $399. Sony may offer consumers different hard drive capacities when the device releases, meaning that the under-$400 model would be the base version of the console.
The very existence of the PlayStation 4.5 poses questions, not least how Microsoft will respond. It will also be intriguing to see what Sony has in mind for the PlayStation 5, and when we will see that next-generation console. But the first sighting of the new Sony machine may be only two months away.
Sunday, May 01, 2016
Our railroads and airports are outdated. Vote Trump
Landing at an American airport is a bit like time-traveling into the past. Outdated design, outdated technology, and outdated regulations are crippling many U.S. air hubs.
Aviation was born in the U.S., and very quickly, American airplanes and American-trained pilots formed the backbone of global aviation. North America remains the world’s largest aviation market today, yet U.S. air transport is no longer the envy of other nations.
America ranks a mediocre 30th in the world for quality of air infrastructure, as measured by a survey of executives—and 127th in ticket taxes and airport charges (meaning they're too high). The country ranks even lower (131st) in carbon dioxide emissions per capita.
There are greater worries ahead: the American Society of Civil Engineers argues that a failure to invest in aviation could represent an estimated cumulative loss of $313 billion by 2020—translating into 350,000 fewer jobs—and a whopping $1.52 trillion by 2040.
The U.S. system is characterized by crowded skies; price competition among airlines and resulting low profitability; competition among airports, leading to congestion in some places and wasted capacity in others; outdated ground facilities; a dearth of intermodal links such as air-to-train connections; high fuel utilization and air pollution; slow technological uptake; and dependence on outdated intergovernmental agreements for access to foreign markets.
The U.S. is falling short and falling behind. That’s true even on the cargo side: Hong Kong has already replaced Memphis as the world’s number one air cargo hub. Today, international travelers represent 11 percent of total U.S. airline passengers.
They contribute more than $116 billion in direct spending and another billion in indirect spending annually. For all of that, they are being underserved: the World Economic Forum ranks the U.S. 121st out of 180 countries in terms of the burden of its visa requirements.
There are many major airports in the United States, but few are considered good by today’s global standards. In a 2013 survey of the world’s best airports, 12 million passengers ranked more than 400 airports across 39 categories—and no American airport was even in the top 25.
Only four U.S. airports made it into the top 50. Nations in the Middle East and East Asia are building new, efficient, intermodal, technology-enhanced airports, while the U.S. lags behind on basics like core infrastructure. In some U.S. airports, there is no single communication network everyone can use, and emergencies can overload cellphone systems.
Development of airports has been left to cities and regions. Local authorities are often focused on the land rather than landing – the value of retail sales or real estate near airport facilities, rather than potential throughput, intermodal efficiency, or actual passenger mobility.
As a result, one of the biggest problems with U.S. airports now is reaching them: if you've taken a train directly to an airport, it was probably in another country. Our largest cities (New York, Los Angeles, and Chicago) have no direct mass transit from their airports to their population cores, although Atlanta's Hartsfield airport, with its rail connection, is an exception.
Hong Kong, meanwhile, has a brand-new high-speed rail link that runs every four minutes, complete with fully integrated baggage check-in at its Central railway station downtown.
While the aviation industry has figured out how to lift millions of pounds of aluminum, fuel, cargo, and passengers 35,000 feet into the air—a technological feat in and of itself—its technology is in desperate need of modernization. Ticket agents are often part-time coders, untangling software written in the early 1960s. Cockpit controls look like museum installations when compared to the iPads passengers are using.
Information empowers. Empowered pilots in empowered aircraft can empower passengers—or at least enlighten them. Real-time decision-making can reduce costs and minimize delays. The FAA estimates, for instance, that two-thirds of weather delays are avoidable.
Superior weather information would make it possible to predict airspace and route availability, as well as delays, diversions, and tarmac risk. With greater forecast accuracy for pilots, control towers, and operations centers, airlines could carry less contingency fuel, and flight planners could better anticipate ground holds, deicing, and capacity changes.
The costs of fuel burn while taxiing amount to about $25 per minute; diversions cost about $15,000 to $100,000 per aircraft; and an FAA tarmac-delay penalty runs to about $27,500 per passenger. These numbers can add up to millions of dollars on a full flight.
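Those per-minute and per-passenger figures compound quickly. Here is a quick sanity check on the "millions of dollars" claim using the costs cited above; the passenger count and delay length are hypothetical assumptions chosen for illustration.

```python
# Rough cost check using the figures cited above. The passenger count
# and taxi time are ASSUMPTIONS for illustration, not FAA data.

TAXI_FUEL_PER_MIN = 25               # dollars per minute, cited above
TARMAC_PENALTY_PER_PAX = 27_500      # dollars per passenger, cited above
DIVERSION_COST = (15_000, 100_000)   # dollars per aircraft, cited range

passengers = 180          # assumed: a full narrow-body flight
extra_taxi_minutes = 40   # assumed: a bad ground delay

fuel = TAXI_FUEL_PER_MIN * extra_taxi_minutes
penalty = TARMAC_PENALTY_PER_PAX * passengers

print(f"Extra taxi fuel:         ${fuel:,}")               # $1,000
print(f"Tarmac penalty exposure: ${penalty:,}")            # $4,950,000
print(f"Worst-case diversion:    ${DIVERSION_COST[1]:,}")  # $100,000
# The per-passenger tarmac penalty dwarfs every other line item,
# which is how a single full flight can reach into the millions.
```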
Technological innovations provide new hope for U.S. aviation. The Weather Company is building a service that helps airlines use weather data to change flight paths to avoid turbulence, delivering a smoother, safer, faster, and more efficient travel experience.
The FAA is allowing trial use of iPads in the cockpit. Airlines are exploring glide-path landings to reduce fuel use and noise during descents. Yet the barrier to progress is often the burdensome and bureaucratic process of regulatory approval. Modernized oversight is needed to speed up adoption of newer and better technologies.
Industry associations have called for a national strategy to make America’s air transport system better for everyone who uses it. Imagine flying with pilots empowered by technology to make better decisions for passengers. Imagine next-generation air traffic control generating quintuple wins: greater safety, lower costs, fewer delays, lower carbon emissions, and seamless connections.
America’s air traffic control entity should be made independent—free from the short-term Congressional budget cycle—and the FAA and Department of Transportation should collaborate on moving promising technology forward faster.
Being able to fly with fewer delays won’t be enough if there are also major delays in getting to and from airports with shabby facilities. American institutions operate in silos too often, while airlines fly above them all. If the American public demands an upgraded national air strategy, it can be done.