Cloud computing, Infrastructure as a service (IaaS): nothing new about that. Yet the month of June saw two momentous announcements.
The first was from Microsoft, which announced the addition of IaaS to its Azure platform, along with a new management portal that may prove equally significant, for reasons I will give in a moment.
The second was from Google, at its IO 2012 conference, when it announced Google Compute Engine (GCE), which lets you launch Linux virtual machines (VMs) on Google's platform.
Google may be a new player in the IaaS market, but you would expect managing this kind of service to come naturally to a company that has built its own search and cloud services on a massively scalable infrastructure. Google also has a good track record for reliability, judging by its existing Google Apps services. It is not perfect; but then neither are others such as Amazon or Salesforce.com, both of which have occasional service interruptions.
In fact, one of the advantages of major new entrants into the market is the possibility of building fail-safe solutions across several cloud vendors, making it less likely that cloud downtime will cause severe loss of business.
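To make the idea concrete, here is a minimal Python sketch of that kind of multi-vendor failover. The provider names and availability checks are invented for illustration; real deployments would of course involve vendor-specific APIs.

```python
# Illustrative sketch: fail over between cloud providers when one is down.
# Provider names and the is_up checks are hypothetical, not real APIs.

def deploy_with_failover(providers, deploy):
    """Try each provider in order; return the first successful deployment."""
    errors = {}
    for name, is_up in providers:
        if is_up():
            return deploy(name)
        errors[name] = "unreachable"
    raise RuntimeError(f"all providers failed: {errors}")

# Simulate an outage at the primary provider.
providers = [
    ("primary-cloud", lambda: False),   # down
    ("secondary-cloud", lambda: True),  # up
]
result = deploy_with_failover(providers, lambda name: f"deployed to {name}")
print(result)  # deployed to secondary-cloud
```

The point is only that with two or more credible vendors, the failover target exists at all; until recently there was often nowhere comparable to fail over to.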
What about Windows Azure? This one has made a big impression on me, partly because (unlike GCE) I have been able to try it out, and partly because I spoke to Microsoft Corporate VP Scott Guthrie about the new features.
He told me how, soon after he moved to work on Azure in 2011, his team sat down and tried using the service, encountering numerous problems ranging from sign-up difficulties to problems finding documentation.
Since then Microsoft has released not only a wide range of new features, including durable VMs alongside the existing stateless VMs, but also a new administration portal that is a pleasure to use.
Does that matter, when what really counts is the cloud technology, its performance and reliability?
I think it does. A good user experience changes behaviour. It is now so easy to log in and create a VM on Azure that I will be using it myself when I need to spin up a server to test some software. Click Virtual Machine - From Gallery - pick an operating system - type a name and password, select a machine size, and it is done. A few minutes later you can log in with remote desktop and get working.
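The same wizard flow could in principle be scripted. The sketch below is purely hypothetical Python - the create_vm function and its parameters are invented for illustration and do not correspond to the real Azure management API - but it shows the shape of those few steps.

```python
# Hypothetical sketch of the portal wizard as a script. create_vm and its
# parameters are invented for illustration, not a real Azure API.

def create_vm(gallery_image, name, password, size):
    """Mimic the portal flow: pick an image, name the VM, choose a size."""
    if size not in {"ExtraSmall", "Small", "Medium", "Large"}:
        raise ValueError(f"unknown size: {size}")
    return {"image": gallery_image, "name": name, "size": size,
            "status": "Provisioning"}

vm = create_vm("Windows Server 2008 R2", "test-server", "s3cret!", "Small")
print(vm["status"])  # Provisioning
```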
With a bit of effort, you can even connect Azure to your internal network.
If it is easy to get started, users are more likely to try it out and, all going well, start using the service in anger. My expectation is that Azure will see a lot more activity as a result.
It has taken too long, but Microsoft is now a real contender in cloud infrastructure.
With Google also coming into play, you may wonder if Amazon will finally feel some heat. I actually doubt that. It is a growing market, and Amazon is the leader by far.
It seems to me that it is more the other, smaller cloud hosters who should worry, as well as those in the on-premise server market. Increasingly, you will not only be testing your new solutions in the cloud, but deploying them there as well.
I have been spending some time with the recently released Sencha Architect 2. This is a development environment with three core components:
Ext JS 4.0 Framework: an HTML5 application framework for desktop browsers
Sencha Touch 2.0: an HTML5 application framework for mobile browsers
Sencha Architect IDE: a visual development tool for both Ext JS and Sencha Touch
Architect is a commercial product, but there are free and open source versions of Ext JS and Touch, with various licensing and support permutations available.
I installed Sencha Architect on Windows, where it works, though I cannot quite describe it as Windows-friendly; there is a Mac flavour to the documentation, not everything quite works in Internet Explorer, and Chrome or Safari is recommended.
What you get though is an elegant IDE which is focused 100% on applications, rather than general HTML design. It is not Eclipse-based, which I found interesting having recently also tried the latest Titanium IDE from Appcelerator, which is built on Eclipse. Although Eclipse is a wonderful thing, it does add complexity and overhead compared to a lightweight, dedicated IDE like Sencha Architect.
The frameworks are also interesting. Both Ext JS and Sencha Touch (which are similar in many respects) are based on a Model-View-Controller design, and this is neatly expressed in the IDE which shows Controllers, Views, Stores, Models and Resources in its Project Inspector. A store is essentially a collection of model instances, and might for example be an Ajax proxy retrieving JSON data from a remote URL. The image below uses this technique to show bars in London. The app is designed for a smartphone, though I am displaying it in Google Chrome to test.
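The store-as-a-collection-of-models idea translates to any language. Here is a rough Python analogy - this is not Ext JS code, and the Bar model is invented - of a store populated from JSON, as an Ajax proxy would do from a remote URL.

```python
# A rough Python analogy (not Ext JS code) for the Store/Model relationship:
# a store is a collection of model instances, typically filled from JSON.
import json

class Bar:  # plays the role of a Sencha Model
    def __init__(self, name, postcode):
        self.name = name
        self.postcode = postcode

class Store:  # plays the role of a Sencha Store with an Ajax/JSON proxy
    def __init__(self, model):
        self.model = model
        self.records = []

    def load_json(self, payload):
        """In Ext JS the proxy would fetch this from a URL; here it is inline."""
        self.records = [self.model(**rec) for rec in json.loads(payload)]

store = Store(Bar)
store.load_json('[{"name": "The Lamb", "postcode": "WC1N"}]')
print(store.records[0].name)  # The Lamb
```

In the real frameworks the controller then binds such a store to a view component, such as a list, which is what the IDE's Project Inspector makes visually explicit.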
These frameworks are not the easiest to pick up quickly, but I was struck by the clean design of both the code and the IDE. Further, Sencha apps generally look good and in many cases the visual components come close to what you can achieve with native code.
From what I can tell, the pressure on developers to create apps that play nicely with a variety of devices, from Windows desktops and laptops through to iPads and Android smartphones, will only increase. Sencha is worth a look.
It is a little early for a review of the year, but not too early to state that 2011 has brought profound changes to the software development world. Although I am thinking mainly of the client, I would also argue that client and server are so intertwined that both are affected. As an example, I have heard developers moving away from SOAP web services not because of any conviction that REST is a better approach, but because the move away from Windows and towards HTML clients makes SOAP web services more difficult to consume.
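A small illustration of why: the same value wrapped as a SOAP envelope and as JSON, consumed in Python. The envelope below is a simplified toy example, not a real service contract, but the contrast in effort holds for HTML/JavaScript clients too.

```python
# Why REST/JSON is easier to consume than SOAP: the same piece of data
# wrapped both ways. The envelope is a simplified illustration only.
import json
import xml.etree.ElementTree as ET

soap = """<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Body><GetPriceResponse><Price>42</Price></GetPriceResponse></s:Body>
</s:Envelope>"""

rest = '{"price": 42}'

# SOAP: namespace-aware XML parsing just to reach one value.
ns = {"s": "http://schemas.xmlsoap.org/soap/envelope/"}
body = ET.fromstring(soap).find("s:Body", ns)
soap_price = int(body.find("GetPriceResponse/Price").text)

# REST/JSON: one call.
rest_price = json.loads(rest)["price"]

print(soap_price == rest_price)  # True
```

A browser client has no WSDL tooling to lean on, so the left-hand path above is what an HTML app actually faces; the right-hand path is one JSON.parse call.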
So what's changed? Simply put, three platforms which once seemed strategic are now in obvious decline. Getting the nuance right for these platforms is tricky. Lots of software still runs and is still widely used long after it has ceased to be strategic for the company which supports it. All the platforms mentioned negatively below are still in active development; they are not going away and will still be running ten years and more from today. They come with health warnings though: depending on these platforms means that your software will gradually become more difficult for users to run and will be left behind by new technologies.
In the run-up to the launch of Microsoft's Visual Studio 2010 I spoke to a number of Microsoft platform developers. The consensus then was that Silverlight was very important and possibly the future of Microsoft's client platform. The view was supported by the company's energetic development efforts for Silverlight. It also made a lot of sense: a lightweight, secure, cloud-centric client that escaped the GUI limitations of Win32, worked in the browser or as a desktop application, and as a bonus ran on the Mac as well as Windows. Silverlight, as I noted in several articles, is client-side .NET done right.
This is not the place to write a long screed about why Silverlight failed, but rather to note that at the end of 2010 it became obvious that Microsoft was changing direction. At the Professional Developers Conference, October 28-29 2010, it was hardly mentioned, and the company focused instead on HTML and Internet Explorer 9. The full extent of its new strategy was not shown until this year, at the BUILD conference in September.
It was not only external developers who were surprised by what seemed a sudden change of direction; the same seems to be true of many within Microsoft itself. Nor am I sure exactly when someone decided that Silverlight was no longer strategic, though there are clues in the Silverlight release schedule. When Silverlight 4 was unveiled in November 2009 it was still in the ascendant. Silverlight 5, due out shortly, suggests that it was still considered important in early 2010. Visual Studio LightSwitch, released this year, was likely planned in part as a way of boosting Silverlight, since it builds Silverlight applications. But nobody is talking about Silverlight 6.
Silverlight is still the development platform for Windows Phone 7, but many observers, myself included, believe this will give way to a variant of the new Windows Runtime (see below) in a future version.
This has been a costly experiment for Microsoft. If the company had done the Windows Runtime, rather than Silverlight, back in 2007, imagine how much stronger its position would be now. That said, it is not all wasted. XAML, the presentation language in Silverlight and in Windows Presentation Foundation, continues in the Windows Runtime, and so does the essence of the secure, cloud-centric client development model.
Back in 2007 Silverlight seemed to be in part a competitive response to the increasing popularity of Adobe Flash. This month though, Adobe went through wrenching changes of its own, announcing the end of Flash on mobile browsers and a fundamental shift in business strategy away from enterprise development and towards content creation and distribution.
There are plenty of parallels with the Microsoft case. One is that the changes also came as a surprise to many within the company, who just a few weeks before, at the MAX conference in Los Angeles, were talking confidently about the future of Flash and of Flex, the application-centric SDK for Flash. Here is Doug Winnie, a casualty of the inevitable layoffs:
The product managers, evangelists, community managers, and developer relations team members found out the news and the way it was communicated at almost the same exact time you did. They are wrestling with the news and your reaction in real time--so please be supportive of them as they dig through everything.
While on the 3rd day of my vacation in Mexico, I got the call with the explanation that Adobe is doing a major refocus and as part of that, many of us "enterprise" types are no longer required. "Überflüssig" I guess is the correct German word for the situation. Keep in mind that I now speak as an individual, not as an Adobe employee. I missed most of the official story due to the timing of my vacation but caught up with a few news outlets to get the rationale.
But isn't Flash still going strong on desktop browsers, and the Flex SDK heading for great new things as an open source project at the Apache Foundation? Well, maybe. Adobe is not betting on that though; it is betting on design tools for content, HTML5, and packaging and distributing publications and apps. Its Flash technology is still critical to how that is done under the covers, but Flash itself will be invisible.
Adobe also says that its LiveCycle middleware will continue to evolve in two specific niches:
We will continue to sell and support our LiveCycle products in the government and financial services markets, two areas where the LiveCycle value proposition remains especially strong.
Again, maybe. This sounds more like Adobe keeping faith with some important customers, than a strong future for LiveCycle.
Microsoft announced another profound change in direction at its BUILD conference in September. Although related to the decline of Silverlight, this one deserves its own heading. What we saw was that the Win32 platform on which Microsoft has built its prosperity for the last twenty-one years or so (Windows 3.0 came out in 1990) is now being shunted aside. "Shunted aside" is the right term because it is still there in the forthcoming Windows 8, but it is side by side with the new Windows Runtime (WinRT) and a touch-friendly user interface called Metro. The company's goal is to create a platform that will succeed against Apple's iOS. It runs on ARM as well as Intel x86 and has its own Windows Marketplace, similar in concept to Apple's App Store.
Leaving aside the merits of WinRT, the big news here is that Microsoft is finally moving away from the Windows desktop on which most of us have done our work day to day for the last two decades. The reasons are obvious: mainly the rise of iOS and the iPad, but also the success of the Mac among developers and at the premium end of the laptop market. Windows was already in decline.
Your Win32 applications will work forever, but Microsoft's energy is now going elsewhere.
That is speculation; but the long-term decline of Win32 is not.
If these platforms are in decline, what are the ones that are rising fast? That is simple to answer: Apple iOS, Google Android, and HTML5 in general. Are these good for the next two decades, as Win32 was, or will they be on the deprecated list in a few years? That is hard to say; if I had to rate them in order of likely longevity I would guess this:
1. HTML5
2. Apple iOS
3. Google Android
Predictions though are a dangerous game, and I would be interested in other opinions.
Microsoft's BUILD conference last week was a fascinating event. Of course the headline news was about Windows 8, for which we got the full technical details, or at least most of them, for the first time. There is also a public preview, and I tried out Windows 8 on a high-end Samsung tablet loaned for a few days, then again on a VirtualBox virtual machine after my return to the UK.
Windows 8 will no doubt arrive in a year or so, and we can debate whether it will be a storming success, a dismal failure, or something in between. I think it makes a great tablet operating system, but purely considered as a tablet, it will not be easy for Microsoft to break into the market dominated by Apple's iPad and with Android mopping up most of what remains. The purpose of BUILD was to encourage developers to build apps for the new Metro-style user interface, and if Microsoft can build up a decent range of apps with which to populate its new store, the early Windows 8 tablets will have more chance of success.
It is tempting though to think that this is mainly aimed at consumers, and the fact that the sample Metro apps are mostly games or other trivialities reinforces that impression. Does that mean Windows 8 is insignificant for businesses, or for business software developers?
I do not think so. In fact, the more I reflect on BUILD the more it seems to me a pivotal event not just for Microsoft, but for the IT industry. Here is my reasoning.
First, at BUILD Microsoft made it clear that Windows now has two personalities, built on different programming models and in fact different APIs. The old Windows, now referred to as "desktop", trundles on as before. There are few changes from Windows 7 in the preview build, other than that the Start menu switches you to the new Metro-style user interface, a controversial decision that may become user-configurable in the final release. Yes, Explorer now has a ribbon, the file copy dialog is improved, and I am sure that there will be more small and cosmetic changes to desktop Windows before final release, but they will be minor.
It seems to me that Microsoft itself has now re-positioned desktop Windows as a kind of legacy environment, even though it is the one that most of us are likely to use most of the time. Irrespective of whether Metro-style Windows is a success, the implications of this are huge. After all, Windows still dominates business computing. Yes, Microsoft will still invest in desktop Windows; but the strategy is focused on Metro-style and it is plausible that Microsoft will never now make radical changes or advances on the desktop side.
Second, Metro-style Windows 8 is not just a touch-friendly user interface. It is designed as a client for cloud services. This is most obvious when you realise that Microsoft has not included data providers for local network database servers like SQL Server; you are meant to interact with data via web services. Metro-style apps are isolated from one another, and can only communicate with the file system, outside their own isolated storage, via specified, user-controlled mechanisms called Contracts. Windows 8 shows that Microsoft really is embracing cloud computing, and that may be more significant than the fact that it runs nicely on tablets.
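As a sketch of that pattern, here is a minimal Python example of a client that consumes JSON from a service rather than opening a database connection. The endpoint and payload are invented, and the HTTP transport is faked, to keep the example self-contained.

```python
# Sketch of the web-service data pattern: the client consumes JSON from a
# service endpoint rather than opening a SQL connection. The URL and
# payload are invented, and the transport is faked for self-containment.
import json

def fetch_orders(get):
    """`get` stands in for an HTTP GET against a service URL."""
    return json.loads(get("/api/orders"))

# A fake transport returning what the service would send over the wire.
fake_get = lambda url: '[{"id": 1, "total": 9.99}, {"id": 2, "total": 4.50}]'

orders = fetch_orders(fake_get)
print(len(orders), orders[0]["total"])  # 2 9.99
```

The design consequence is that the database lives behind a service boundary the client never crosses directly, which is exactly the shape cloud-hosted back ends want.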
Third, and related, is that Microsoft is locking down Windows, especially in the version for ARM which we did not hear much about at BUILD. If Microsoft gets it right, Windows on an ARM tablet will be as secure as an Apple iPad. It is hard to be definitive about this, because the role of desktop Windows in the ARM build has yet to be clarified, but from what I can tell Microsoft plans Windows 8 on ARM as essentially a Metro-style platform, with apps available only through the new Windows Store. If users can only install Metro apps, the entry points for malware are greatly reduced. I suspect that Microsoft also has its eye on Apple-like control and profits from being the only source for Windows 8 apps, with interesting implications for software freedom, at least in the consumer market.
If there is a moment in history when desktop computing became legacy, I suspect BUILD 2011 will be a good candidate.
The Windows desktop will be around forever, and in fact the stability of the platform in terms of forward compatibility has if anything improved, now that we know major changes are unlikely at least until Windows 9 in say 2015, and probably never.
More significant though is that the cloud computing model now has the backing of all the major industry players, even the one with what looks like the most to lose.
CEO Steve Ballmer called Windows 8 Microsoft's "riskiest product bet" and I am inclined to agree.
There is intense interest in cloud computing today. But what about take up? I am interested here in the cloud as an application platform, not how many people are using Google Mail.
There is so much noise from vendors about cloud - noting that this is a nebulous and abused term - that it is easy to get the impression that most of us are busy migrating applications to shiny new cloud platforms, and that new projects will almost inevitably be cloud-based.
I spoke recently to Nick Hines, CTO of innovation at global software developer and consultancy ThoughtWorks. This is a company that has embraced Agile methodology and has always struck me as thoughtful and watchful in its approach to software development. It publishes a regular Technology Radar examining technical trends and assessing which are ready for mainstream adoption and which are in decline.
When I spoke to Hines I was researching application development on cloud platforms, and trying to discover how Microsoft's Azure effort was perceived in the real world. I suppose I expected that the company would have many cloud projects on the go and be well placed to assess the strengths and weaknesses of rival platforms.
The most revealing comment came at the end. After chatting about cloud and Azure for half an hour, I asked: could he put a figure on the proportion of ThoughtWorks projects that involve cloud hosting, not just for development, but for production deployment?
That would be relatively small. One to two percent at this point.
he told me. No more than two out of a hundred projects deployed to the cloud. Considering the level of vendor hype around not only Azure but also Amazon web services, Force.com from Salesforce.com, Google App Engine and so on, that is remarkably small.
I must be careful not to mis-represent Hines. He is of the view that not only is cloud significant as a platform, but that it will take over:
This is the way the world is going. We all know it. You can imagine that in 20 years time the idea that companies have their own datacentres is going to be quite anachronistic. How quickly we get there is yet to be seen.
We are then at an interesting point in terms of technology, where we think we can see the future in some respects, but there is a near-consensus in the enterprise development world that it is not yet ready.
Why is it not ready? This of course is a point of debate; but enterprises dislike uncertainty, and there is still uncertainty around cloud platforms. When you ask vendors about the big issues, security and resilience, the best they can do is to point to past performance or give you a speech about the efforts they have made in those areas. CIOs may worry about a nightmare scenario where the system is down and they have no direct control over how it will be fixed. This is responsibility without power; and no, being able to say "we have a service level agreement" is not a solution.
This is also why approaches to the cloud that allow flexibility are popular. Not all risks are equal. For example, you can use a cloud platform as a means of scaling an application at times of peak demand, while keeping the data and code on your own servers. This kind of approach does not yield all the benefits of multi-tenancy or platform as a service, but it means that in case of calamity you can easily deploy to a different platform. The idea of deploying virtual machines to the cloud, while keeping hold of the master image, is popular for the same reason. Hines told me:
People have a greater level of comfort with infrastructure as a service. Whilst it may not have all the advantages that platform as a service offers in terms of reduced administration and so forth, people are more comfortable feeling that they are closer to the tin.
This is the enterprise perspective of course. If you are a start-up or independent developer already accustomed to depending on third-party internet services, then cloud deployment feels less risky.
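The peak-demand approach mentioned above can be sketched very simply. The capacity threshold and routing logic below are invented Python for illustration, not any vendor's bursting mechanism.

```python
# Sketch of the "bursting" pattern: handle requests on-premise up to a
# capacity limit and spill the excess to cloud instances at peak.
# The threshold and routing rule are invented for illustration.

LOCAL_CAPACITY = 3  # hypothetical on-premise concurrency limit

def route(requests):
    """Assign each concurrent request to on-premise or cloud capacity."""
    placements = []
    for i, req in enumerate(requests):
        target = "on-premise" if i < LOCAL_CAPACITY else "cloud"
        placements.append((req, target))
    return placements

placed = route(["r1", "r2", "r3", "r4", "r5"])
print(placed[-1])  # ('r5', 'cloud')
```

Because the code and data of record stay on your own servers, a cloud outage degrades capacity rather than taking the system down, which is precisely the risk profile enterprises prefer.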
The consumer perspective is also relevant, despite what I said above concerning Google Mail. If as individuals we learn to trust cloud providers because of our experience with email or personal documents and pictures, then when we step into the business world we will be more inclined to approve a cloud deployment.
Concerning Microsoft Azure, Hines believes that Microsoft needs to prove its ability as a cloud provider, and that a success with the recently launched Office 365 could give Azure a boost, even though it is a different kind of service. It is the same kind of logic: if Microsoft can run Office 365 successfully, the comfort factor over Azure will increase. The reverse is also true.
There is always reason for caution; but it also seems to me that this is a moment of opportunity for those who take well-judged risks with cloud platforms. I would be interested to hear from both developers and CIOs about your perspective on this. Do you trust the cloud yet, and if not now, then when?
Microsoft's cloud computing platform, Windows Azure, was announced at its Professional Developers Conference in October 2008. That was also the Windows 7 PDC, which diverted attention from Azure, but another problem was that Microsoft itself seemed half-hearted about it; it felt like a box the company had to tick in order to keep up with Amazon and Google, but that it was happier to keep on selling on-premise servers. It did not help that signing up for the beta was fiddly and difficult - anyone remember those "developer tokens"? - and that the early developer portal and tools were awkward to work with.
Two and a half years later, Azure is much improved, and those early foundations have proved solid despite the poor developer experience. Microsoft has also made it easier to kick the tires. In particular, if you have an MSDN subscription - which most serious Microsoft-platform developers will have - then since April 12 you get free Azure compute time amounting to 750 hours a month of an "Extra Small" instance with a Professional subscription, or more with the higher-level subscriptions. That means an Extra Small instance can run continuously without charge. Even an Extra Small instance is not that small a machine: when I tried it, I found a virtual quad-core 2.1GHz processor with 767MB RAM. There is also an allowance of storage, SQL Server, and so on. All the current offers, including similar deals for partners, are listed here.
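The arithmetic behind "can run continuously without charge" is simple: even the longest month falls within the allowance.

```python
# Why 750 free hours covers continuous running: the longest month has
# 31 days, and 31 days of round-the-clock uptime is still under 750 hours.
hours_in_longest_month = 24 * 31
print(hours_in_longest_month)         # 744
print(hours_in_longest_month <= 750)  # True
```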
This is a smart move from Microsoft, since previously it was easy to spend money inadvertently. The reason is that Azure charges for deployed instances even when they are not running. You have to delete them completely to stop paying.
But what is Azure? It is worth having a look at some of the more detailed descriptions of how the platform works, like this book extract by Chris Hay and Brian Prince, or this MSDN article, or this description from the perspective of Ryan Barrett who works on Google App Engine. Conceptually, it is a way of deploying applications in the cloud; but it is implemented by deploying virtual machines, with at least one for each application. The reason that deploying an application to Azure takes a few minutes is that the service has to configure a virtual machine image with your code and the correct runtime components, and then copy that virtual machine to one or more runtime locations and spin them up. Azure retains the original, so that it can replace the runtime copy if anything goes wrong. Note the implication that you should never store state or data on the application instance, as it could be wiped at any time. Microsoft also takes responsibility for patching the images with bug-fixes and security updates.
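The stateless-instance model described above can be sketched in a few lines of Python. The class and its methods are illustrative only; they simulate how a runtime copy is restored from the retained master image, with any local writes lost.

```python
# Illustrative simulation of Azure's compute model: the platform keeps the
# master image and replaces a failed runtime copy from it, so anything
# written to an instance's local disk can vanish. Names are invented.
import copy

class Deployment:
    def __init__(self, master_image):
        self.master = master_image
        self.instance = copy.deepcopy(master_image)  # the running copy

    def write_local(self, key, value):
        self.instance[key] = value  # state stored only on the instance...

    def heal(self):
        self.instance = copy.deepcopy(self.master)  # ...is wiped on recovery

d = Deployment({"app": "mysite", "runtime": ".NET 4"})
d.write_local("cache.tmp", "precious data")
d.heal()
print("cache.tmp" in d.instance)  # False
```

This is why durable state belongs in Azure storage or SQL Azure rather than on the instance itself.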
The new VM role is a bit different. In this case you simply upload your own VM image and Azure runs it; you have responsibility for patching and maintenance. It is not an ideal scenario, since you cannot directly patch the VM in the cloud; you have to prepare a new image and upload it, though you can use differencing to avoid a huge upload. The VM is still stateless, in that if you write any data it can be reverted by Azure to your last upload. See Steve Plank's detailed explanation here. It amounts to a good reason not to use the VM role unless you really need it.
This is just the compute aspect of Azure, of course. It is the other services that make this useful, including SQL Server and/or Azure's non-relational storage, AppFabric access control which can federate with your on-premise Active Directory, and so on.
What this means is that if you deploy on Azure, and presuming Microsoft has done a good job with the implementation, you get a high level of resilience, and the burden of maintaining the operating system is removed. With that in mind, Azure's pricing looks reasonable to me. You are not just paying for a VM to run your application, you are paying for a substantial infrastructure behind it. If you think what you would need to install and manage locally to achieve the same level of reliability, then Azure looks like excellent value; and I suspect that for some subset of applications it is the best choice on the market.
Every organisation already has a computing infrastructure of some kind, and despite the well-rehearsed advantages of cloud computing, the cost of doing something different and the fear of losing control of your own IT systems - which is a genuine concern - can make Azure or its cloud competitors a hard sell. At the same time, it seems to me that anyone planning to deploy a new application, or considering how they deploy an existing one, should be considering cloud as well as on-premise options; and if it is a Windows platform, Azure should be on that list.
If you turn this into a skills issue, knowledge of Windows Azure is an advantage in the job market; and now that Microsoft has made it easy to try, it is well worth getting some hands-on experience.
Is the Internet making us stupid? In his latest book, Nicholas Carr suggests that, at the very least, it may be changing our thinking patterns. In The Shallows, he cites a UCLA study in which several seasoned web users were asked to conduct Google searches alongside several web neophytes. Scans show that their brains fired differently, particularly in the dorsolateral prefrontal cortex, which is associated with decision-making. The phrase 'this is your brain on Google' springs to mind.
It was further shown that after a relatively short period of Internet use, the brains of the Internet 'newbies' changed to match those of the web veterans, indicating, according to Carr, that it is relatively easy to change the very physical act of thinking through short-term exposure to the Internet. Now, consider the hundreds of millions of us who sit at a desk for hours each day doing nothing but hypertext-based work. Moreover, reflect on how early our schoolchildren are exposed to these new technologies. I was recently informed, to my horror, that my five-year-old would not be taught cursive handwriting because it was no longer deemed relevant. When Apple coined the phrase 'think different', I doubt it meant us to go this far.
Carr suggests that we are beginning to think more broadly, instead of deeply. Cue the by-now hackneyed arguments about modern students' inability to read a full-length novel, and the attention deficit disorder that plagues the average knowledge worker who is torn between a panoply of hyperlinked documents each day.
Such arguments may well be true, but they're also boring, and overcooked. The real meat of the debate lies in whether this switch to broader thinking is a good thing or not. Or whether, indeed, one has to choose between the two, or whether it is possible to maintain a level of both depth and breadth by compartmentalising our usage of online technologies and devoting time to more meditative activities.
One of the most interesting reflections on this argument lies in the London Review of Books. In his review of The Shallows, Jim Holt says that while it may indeed be possible to eventually augment our own 'postcode' memory (the part of our brain into which we cram facts and figures) with search engines, there may be some unhappy side-effects.
It might seem more productive simply to wire an Internet connection directly into our heads (something which will surely be doable within a couple of decades) and use it as a form of extended memory. Why bother remembering when William Howard Taft was US president, when you could simply think a search query and have the data returned to you?
However, things get sticky when one considers associative memory. This, as Holt points out, is the fountainhead of creativity. It is the landscape in which metaphors emerge, and it is the filter through which big ideas ultimately trickle.
Holt points to the French mathematician Henri Poincaré, who would immerse himself in facts and theory for days without conclusion, but who would then reach a sudden epiphany in mathematical theory while stepping onto a bus. Poincaré concluded that soaking himself in ideas and facts enabled his unconscious memory to process them in ways that led to creative results, which appeared when he wasn't even thinking about mathematics.
This is something that computers can't do, Holt says, warning us against throwing the creative baby out with the bathwater. If we 'outsourced' our postcode memory to the Internet, would we eliminate creativity by stifling the subconscious processing of associative ideas?
There is another potential future, of course. Holt is a self-professed late adopter who doesn't have his finger on the technological pulse. Computers might well be able to learn how to process ideas using a simulated form of associative memory, after all. The industry is already working hard at producing semantically linked information storage, in which concepts are wired directly into the data representing them. Semantic search (in which the search engine understands the ideas that you're looking for) has long been a holy grail for the search business, and we are getting closer. Perhaps, in the future, the Internet might have its own 'Eureka moments', without our help? What would that form of digital creativity look like?