Archive for June 25th, 2012
NEW DELHI: Your e-mail has lived in the internet cloud ever since you opened your first account. Now brace yourself to store far more of your content in the cloud, with the convenience of being able to access it anywhere, anytime, on any connected device. That’s the promise of cloud computing, and the trend will only accelerate over the coming years.
According to research firm Gartner, the desire to share content and to access it on multiple devices will motivate consumers to store a third of their digital content in the cloud by 2016. Gartner said that just 7% of consumer content was stored in the cloud in 2011, but that this will grow to 36% in 2016.
“Historically, consumers have stored content on their PCs, but as we enter the post-PC era, consumers are using multiple connected devices, the majority of which are equipped with cameras. This is leading to a massive increase in new user-generated content that requires storage,” said Shalini Verma, principal research analyst at Gartner. “With the emergence of the personal cloud, this fast-growing consumer digital content will quickly get disaggregated from connected devices.”
The increased adoption of camera-equipped smartphones and tablets is allowing users to capture huge numbers of photos and videos. Gartner predicts that worldwide consumer digital storage needs will grow from 329 exabytes in 2011 to 4.1 zettabytes in 2016. This includes digital content stored in PCs, smartphones, tablets, hard-disk drives (HDDs), network attached storage (NAS) and cloud repositories.
The bulk of the cloud storage needs of consumers in the near term will be met by social media sites such as Facebook, which offer free storage space for uploading photos and videos for social sharing. The Gartner analyst said that while online backup services are the most well-known cloud storage providers, their total storage allocated to consumers and ‘prosumers’ is small relative to that maintained by social media sites.
Average storage per household will grow from 464 gigabytes in 2011 to 3.3 terabytes in 2016. In 2012, Gartner believes that the adoption of camera-equipped tablets and smartphones will drive consumer storage needs. In the first half of 2012, a shortage in supply of HDDs as a result of the floods in Thailand provided an impetus for cloud storage adoption, leading to an unusual overall growth rate between 2011 and 2012. The future is definitely on the cloud!
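Those Gartner projections imply steep growth rates. As a back-of-the-envelope check (our own arithmetic, not part of the report), the implied compound annual growth rates can be worked out in a few lines:

```python
# Implied compound annual growth rate (CAGR) from Gartner's
# 2011 and 2016 figures quoted above.

def cagr(start, end, years):
    """Compound annual growth rate as a fraction."""
    return (end / start) ** (1 / years) - 1

# Worldwide consumer digital storage: 329 EB -> 4.1 ZB (1 ZB = 1,000 EB)
worldwide = cagr(329, 4100, 5)

# Average storage per household: 464 GB -> 3.3 TB (1 TB = 1,000 GB)
household = cagr(464, 3300, 5)

print(f"Worldwide storage growth: {worldwide:.1%} per year")
print(f"Per-household growth: {household:.1%} per year")
```

That works out to roughly 66% a year worldwide and about 48% a year per household, which is why the shift of storage off the PC matters so much.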
Nintendo announced a new, much larger 3DS system this week, the Nintendo 3DS XL. For North America, the system will come in red and blue. The announcement was informative, but also boring. Nintendo has never shied away from flooding the market with colorful systems, but there was once a time when they did it with charisma.
That time was during their “Play it Loud!” marketing campaign of the mid-’90s. Nintendo showed its edgy side when advertising a variety of colored Game Boy systems. Commercials showed rebellious kids giving each other wedgies between skateboarding sessions, all set to Butthole Surfers tunes. Nintendo even suggested that kids should “give the world a wedgie.” It was chaos, and it was awesome.
Maybe we’re stuck in the ’90s, but who wouldn’t want to play today’s games as loud as those commercials urged?
Weekly hardware sales in Japan (units, with change from the previous week):
3DS: 61,793 [DOWN] 3,082 (4.75%)
Vita: 34,459 [UP] 21,076 (157.48%)
PS3: 14,673 [DOWN] 920 (5.90%)
PSP: 9,740 [DOWN] 298 (2.97%)
Wii: 5,963 [DOWN] 66 (1.09%)
PS2: 1,237 [UP] 146 (13.38%)
Xbox 360: 1,084 [UP] 83 (8.29%)
DSi LL: 689 [UP] 74 (12.03%)
DSi: 380 [DOWN] 3 (0.78%)
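For readers who like to check the maths, the percentage swings in the chart follow directly from the unit counts. A quick sketch using three of the systems above:

```python
# The week-over-week percentage changes in the chart can be
# re-derived from the unit counts: the previous week's figure is
# this week's units minus the signed change.

data = {
    # system: (units this week, signed change vs. previous week)
    "3DS":  (61_793, -3_082),
    "Vita": (34_459, +21_076),
    "PS3":  (14_673, -920),
}

for name, (units, change) in data.items():
    prev = units - change             # previous week's units
    pct = change / prev * 100         # percent change vs. previous week
    print(f"{name}: {pct:+.2f}%")
```

The Vita line, for instance, reproduces the chart’s 157.48% jump from a base of 13,383 units the week before.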
Roadhouse Interactive, developers of MechWarrior Tactics, recently announced the acquisition of social game developer Embassy Interactive in a press release. Now known as Roadhouse Game Studios, the newly acquired Vancouver team “will focus on new product development for Facebook, smartphones and tablets.”
Embassy Interactive developed the UFC Undisputed Fight Night game for Facebook and Tangram Puzzle Pro for iOS.
TOKYO: Japanese computer maker Fujitsu Ltd. said today that it is developing a new supercomputer to succeed the K supercomputer, which it built with the state-backed research institute Riken and which has lost the top spot for computing speed to a US supercomputer.
Masami Yamamoto, president of Fujitsu Ltd., told a shareholders’ meeting in Yokohama that he hopes to regain the top slot in the world rankings in a few years. Japan dropped to second place after topping the previous two rankings, according to the announcement by the US-European TOP500 project.
However, he declined to talk about when the new supercomputer will be available or its expected performance.
The K supercomputer, now at Riken’s facilities in Kobe, was the first to log a computing speed of over 10 petaflops.
Its name draws upon the Japanese word “kei” for 10 to the 16th power, representing the system’s performance goal of 10 petaflops. One petaflop is equivalent to 1,000 trillion floating-point operations per second.
The Sequoia supercomputer at the US Department of Energy achieved 16.32 petaflops, taking the top slot in the world rankings as of June.
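The “kei”-to-petaflop relationship above is a simple powers-of-ten conversion, which can be sketched in a few lines:

```python
# "Kei" is 10**16; a petaflop is 10**15 floating-point
# operations per second.

KEI = 10**16
PETAFLOP = 10**15

k_target_pflops = KEI / PETAFLOP   # K computer's design goal, in petaflops
sequoia_flops = 16.32 * PETAFLOP   # Sequoia's measured speed, in FLOPS

print(k_target_pflops)   # 10.0
print(f"{sequoia_flops:.3e}")
```

So Sequoia’s 16.32 petaflops comfortably clears the 10-petaflop (one kei) mark that gave the K computer its name.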
LONDON – The decision by the United States and Israel to develop and then deploy the Stuxnet computer worm against an Iranian nuclear facility late in George W. Bush’s presidency marked a significant and dangerous turning point in the gradual militarization of the Internet.
Washington has begun to cross the Rubicon. If it continues, contemporary warfare will change fundamentally as we move into hazardous and uncharted territory.
It is one thing to write viruses and lock them away safely for future use should circumstances dictate it. It is quite another to deploy them in peacetime. Stuxnet has effectively fired the starting gun in a new arms race that is very likely to lead to the spread of similar and still more powerful offensive cyberweaponry across the Internet.
Unlike nuclear or chemical weapons, however, cyberweapons are being developed outside any regulatory framework.
There is no international treaty or agreement restricting the use of cyberweapons, which can do anything from controlling an individual laptop to disrupting an entire country’s critical telecommunications or banking infrastructure. It is in the United States’ interest to push for one before the monster it has unleashed comes home to roost.
Stuxnet was originally deployed with the specific aim of infecting the Natanz uranium enrichment facility in Iran. This required sneaking a memory stick into the plant to introduce the virus to its private and secure “offline” network. But despite Natanz’s isolation, Stuxnet somehow escaped into the cyberwild, eventually affecting hundreds of thousands of systems worldwide.
This is one of the frightening dangers of an uncontrolled arms race in cyberspace; once released, virus developers generally lose control of their inventions, which will inevitably seek out and attack the networks of innocent parties. Moreover, all countries that possess an offensive cyber capability will be tempted to use it now that the first shot has been fired.
Until recent revelations by The New York Times’ David E. Sanger, there was no definitive proof that America was behind Stuxnet. Now computer security experts have found a clear link between its creators and a newly discovered virus called Flame, which transforms infected computers into multipurpose espionage tools and has infected machines across the Middle East.
The United States has long been a commendable leader in combating the spread of malicious computer code, known as malware, that pranksters, criminals, intelligence services and terrorist organizations have been using to further their own ends. But by introducing such pernicious viruses as Stuxnet and Flame, America has severely undermined its moral and political credibility.
Flame circulated on the Web for at least four years and evaded detection by the big antivirus operators like McAfee, Symantec, Kaspersky Labs and F-Secure – companies that are vital to ensuring that law-abiding consumers can go about their business on the Web unmolested by the army of malware writers, who release nasty computer code onto the Internet to steal our money, data, intellectual property or identities.
But senior industry figures have now expressed deep worries about the state-sponsored release of the most potent malware ever seen.
During the cold war, countries’ chief assets were missiles with nuclear warheads. Generally their number and location were common knowledge, as was the damage they could inflict and how long it would take them to inflict it.
Advanced cyberwar is different: A country’s assets lie as much in the weaknesses of enemy computer defenses as in the power of the weapons it possesses. So in order to assess one’s own capability, there is a strong temptation to penetrate the enemy’s systems before a conflict erupts.
It is no good trying to hit them once hostilities have broken out; they will be prepared and there’s a risk that they already will have infected your systems. Once the logic of cyberwarfare takes hold, it is worryingly pre-emptive and can lead to the uncontrolled spread of malware.
Until now, America has been reluctant to discuss regulation of the Internet with Russia and China. Washington believes any moves toward a treaty might undermine its presumed superiority in the field of cyberweaponry and robotics.
And it fears that Moscow and Beijing would exploit a global regulation of military activity on the Web, in order to justify and further strengthen the powerful tools they already use to restrict their citizens’ freedom on the Net. The United States must now consider entering into discussions, anathema though they may be, with the world’s major powers about the rules governing the Internet as a military domain.
Any agreement should regulate only military uses of the Internet and should specifically avoid any clauses that might affect private or commercial use of the Web. Nobody can halt the worldwide rush to create cyberweapons, but a treaty could prevent their deployment in peacetime and allow for a collective response to countries or organizations that violate it.
Technical superiority is not written in stone, and the United States is arguably more dependent on networked computer systems than any other country in the world. Washington must halt the spiral toward an arms race, which, in the long term, it is not guaranteed to win.
A company that makes games like FarmVille and contributes close to 15% of Facebook’s revenue, Zynga is now aiming to be the ‘Google of games’. “We want to be the gaming company that people cannot live without,” its India head Shan Kadavil told ET at the company’s Bangalore office, its first office outside San Francisco.
However, ever since it went public last year, Zynga has been facing investors’ concerns in the US about its declining usage and long-term growth. Kadavil talks about Zynga’s growth strategies and the changes in Indian gaming industry. Excerpts:
Where is Zynga positioned now?
In July 2007 when Zynga was born, our strategy was to connect the world through social games. If you looked at the e-commerce domain, you had Amazon, if you looked at search, you had Google. These had become brand names and people trusted them. There wasn’t one such space in the gaming industry.
There was an unmet need in gaming and that’s where we want to establish ourselves. Today, 292 million people play our games every month on the web globally, about 65 million of them every day. We also have 21 million daily users on mobile.
The India centre in Bangalore is two years old and it is the only multi-functional centre outside the US. Some of our popular games, like Mafia Wars and CityVille, are run from Zynga’s Bangalore centre.
We have teams that take care of the volume of traffic, technology teams that work on data analysis and engineering. Interestingly, we also have a leading Bollywood art director, a cricket commentator, a children’s book author and a fashion designer on board.
Two years down the line, what role would Zynga’s India centre really play?
Almost everything that we do in a particular game happens in India. In Mafia Wars, for instance, everything from its conceptualisation to the delivery of features happened in Bangalore.
The general manager for the game is based in India. That is how we operate. So the Bangalore centre is responsible for how successful this game is. We are also creating our own IP. As far as Zynga is concerned, we are in the middle of three big shifts. One, social networks are changing the entire business.
Secondly, the app economy has transformed this industry and opened it up to a new level. Third, there is a big shift happening in terms of how you move from advertising-based revenue to virtual currencies in the gaming world. The India team contributes to all three areas from the technology perspective as well as the gaming perspective.
What else is different about running a gaming company? How hard is it to find the right creative talent?
The right talent is really hard to get because this is a very different industry. When we started out, we thought we’d get enough talent from other gaming companies, but that didn’t happen. I think what really helped us was the Bollywood industry.
They use high end technology and have some really creative people. We have found great talent from that industry. We also have a lot of expats working here.
What is happening in India’s gaming industry? How has it evolved over the years?
In India there is fairly large traction on the mobile side, and it is quite a healthy reflection of where this market is headed. In the past, gaming was always a niche segment for us. There were few domestic gaming companies and they were mostly developing for US-based firms.
But that’s changing, thanks to social networks and mobile apps. Mobile is changing the way we operate and think. Zynga itself concentrates a lot on mobile: last year we launched seven games on mobile, and about 11 more in Q1 of 2012.
What are your long-term goals?
We are constantly looking at both organic and inorganic ways to grow. At this point we’re hiring as fast as we can. We are very selective because it is hard to find the appropriate talent in this space.
Light, sleek and super-fast, ultrabooks are the future of laptops and netbooks. However, they are also more expensive than the traditional notebooks, so before you make a sizeable financial investment in one, here’s a look at the features you need to focus on.
Sleek, well-designed and utterly fast, ultrabooks are rapidly redefining the traditional concept of notebooks. The name was coined by Intel in 2011 when it introduced the concept and laid down the parameters: ultrabooks had to be less than an inch thick, weigh less than 1.5 kg, boot up in seconds rather than minutes, and have a long battery life. The result has been a flood of sleek and speedy laptops that are a far cry from the clunky ones of not too long ago. But with so many options in the market, with prices ranging from Rs 45,000 to well over six figures, how do you choose the one that works best for you? Here are a few features that you should consider before investing in an ultrabook.
Build and weight
It may seem obvious, but build quality is very important in an ultrabook. Some manufacturers try to cut corners, and weight, by using more plastic in the device, while others opt for aluminium and fibre-based builds. The rule of thumb here is: the less plastic, the longer the ultrabook is likely to last. Also, pay attention to the weight. Anything over 1.75 kg is getting dangerously close to traditional notebook territory.
Screen size and resolution
The jury is still out on the ideal screen size for an ultrabook. Some insist on 13 inches, but there are ultrabooks with both larger and smaller displays. Our advice: opt for the display that affects portability the least (the bigger the display, the bigger the notebook) while offering good readability. Most ultrabooks come with displays at 1366 x 768 resolution. Anything above that is a bonus; anything less is unacceptable.
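If you want to compare sharpness across screen sizes, pixel density is the number to work out. A small sketch (the screen sizes are illustrative examples, not recommendations):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch for a panel of the given resolution and diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

# The common 1366 x 768 panel at a few typical ultrabook sizes
for size in (11.6, 13.3, 14.0):
    print(f'{size}" at 1366x768: {ppi(1366, 768, size):.0f} PPI')
```

The same resolution looks noticeably crisper on a smaller screen, which is why a higher resolution is a bonus mainly on the larger displays.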
Processor
Ultrabooks are designed for fast everyday operation, not for heavy-duty gaming or multimedia. So don’t be swayed if the salesperson quotes high figures, and don’t automatically opt for the fastest processor you see. In most cases you will find that an Intel Core i5 processor running at 1.6 GHz will more than suffice for your browsing and basic computing requirements.
Storage
This is one of the biggest decisions you will have to make when buying an ultrabook. You will often have to choose between hybrid storage, which combines a conventional hard disk drive (HDD) with a solid state drive (SSD), and a pure SSD. While hybrid drives provide more storage, SSDs perform much faster. Though SSDs have less capacity and are more expensive than HDDs, they use less power, take half the time to boot up and produce no noise. Also, keeping an HDD too close to a powerful magnet may erase its data; this isn’t a problem with SSDs.
Connectivity and ports
The slimness that is the trademark of most ultrabooks often comes at the cost of ports and connectivity. So, check the number of ports it has (USB, HDMI, etc) and the connectivity options (Wi-Fi, 3G, Ethernet, et al) that are on board. Also, check the versions of the connectivity options: which version of Wi-Fi it uses (802.11 a/b/g/n) and which version of Bluetooth. The more and newer the options, the better your experience is likely to be.
Keyboards and touchpads
This is a bit of a tricky call, as each ultrabook comes with its own keyboard and touchpad combo. But as these are going to be your primary way of interacting with the device, we suggest that you physically try out the models you like before making your final choice. You will need to see if the keys are well-spaced out and if the trackpad is big enough for your fingers to move comfortably (it often gets squeezed in an attempt to shrink the size of the device) and comes with buttons or support for gestures (to zoom in and out of pages, etc).
Battery life
Good battery life is one of the cornerstones of the ultrabook concept. But ‘good’ is relative and depends on what you need. Most ultrabooks offer about six hours of battery life, and in many cases this cannot be extended with a supplementary battery, as it can in traditional notebooks. So choose wisely; you are likely to be stuck with what you buy.