I have posted before about Delphi, a rapid development tool forgotten by some, but still the best option for Windows native code development combined with a productive visual component library. That was over two years ago though, shortly after I met with Embarcadero CEO Wayne Williams who promised a version of Delphi that would compile for the Mac as well as Windows.
I had nearly given up waiting; but a couple of months back Embarcadero released a new Delphi with features which, on the surface at least, exceeded my expectations: a 64-bit compiler, cross-compilation for OS X, a route to iOS, and a new cross-platform framework called FireMonkey.
It is an amazing list of features, particularly considering the rather disappointing first version of Delphi XE. Embarcadero seems to have done everything promised and more, in one release.
I was keen to try cross-compiling for the Mac, and set it up in what seems to be the most popular way: running Windows in a virtual machine on a Mac, with Delphi installed in the VM. When you install Delphi, or the full RAD Studio which includes C++Builder and other features, the setup includes several components that you then run on the Mac side, including the FireMonkey libraries and a server called the Platform Assistant. You then create a remote profile in Delphi that connects to the Platform Assistant, password-protected for security.
Everything worked on the first try. I added an OS X target to my Windows FireMonkey app, clicked Run, and my simple app opened like magic as an OS X application on the Mac desktop.
Coding for iOS was more work, since you end up exporting the project to Xcode and compiling with the Free Pascal compiler rather than simply using Delphi on Windows, but it did run successfully, and I was able to use my simple test application on an iPhone.
Embarcadero is promising to add Android support at some future date, making this an interesting tool for those who need to support multiple platforms.
Is this the Delphi we have been waiting for? There are a few things that spoil the product. It does seem to have been rushed, which is hardly surprising when you realise that Embarcadero acquired VGScene and DXScene, the products for Delphi that form the basis of FireMonkey, from a company called KSDev only around six months before RAD Studio XE2 was released. I am not sure what plans Embarcadero had for a cross-platform framework when I spoke to Williams in 2009, but it does look like the KSDev deal solved a number of problems.
This rush shows itself in the immaturity of the FireMonkey framework. There are some performance issues as well as limited features compared to what was available with the VCL (Visual Component Library) for Windows. The VCL may be wedded to Windows, but it is hard to leave behind sixteen years of VCL evolution in favour of the first release of a new framework. Existing applications will not necessarily port easily. It is not only a matter of porting from the VCL to FireMonkey. Delphi developers are used to calling the Windows API when necessary, creating code that will not run cross-platform.
It is also worth noting that all FireMonkey controls are custom drawn. There are always compromises in cross-platform development, and in the case of FireMonkey you are giving up the advantages of using native controls on Windows or Mac.
As a cross-platform development tool, Delphi is now up against Adobe Flash Builder, Appcelerator Titanium, PhoneGap, and others. I have been impressed with Adobe AIR in this context, and PhoneGap also has lots of momentum and is ideal for web developers who now need to create mobile apps.
There is every sign though that Embarcadero is serious about FireMonkey and investing in its future. Existing Delphi developers now have a way to move beyond Windows while still using their preferred tool; and the product looks likely to attract new users thanks to its cross-platform capabilities.
Finally I should add that while it is the cross-platform aspect that is most eye-catching, the VCL is not dead and with 64-bit support Delphi is better than ever as a Windows development tool.
Microsoft's BUILD conference last week was a fascinating event. Of course the headline news was about Windows 8, for which we got the full technical details, or at least most of them, for the first time. There is also a public preview, and I tried out Windows 8 on a high-end Samsung tablet loaned for a few days, then again on a VirtualBox virtual machine after my return to the UK.
Windows 8 will no doubt arrive in a year or so, and we can debate whether it will be a storming success, a dismal failure, or something in between. I think it makes a great tablet operating system, but purely as a tablet contender it will not be easy for Microsoft to break into a market dominated by Apple's iPad, with Android mopping up most of what remains. The purpose of BUILD was to encourage developers to build apps for the new Metro-style user interface, and if Microsoft can build up a decent range of apps with which to populate its new store, the early Windows 8 tablets will have more chance of success.
It is tempting though to think that this is mainly aimed at consumers, and the fact that the sample Metro apps are mostly games or other trivialities reinforces that impression. Does that mean Windows 8 is insignificant for businesses, or for business software developers?
I do not think so. In fact, the more I reflect on BUILD the more it seems to me a pivotal event not just for Microsoft, but for the IT industry. Here is my reasoning.
First, at BUILD Microsoft made it clear that Windows now has two personalities, built on different programming models and in fact different APIs. The old Windows, now referred to as "desktop", trundles on as before. There are few changes from Windows 7 in the preview build, other than that the Start menu switches you to the new Metro-style user interface, a controversial decision that may become user-configurable in the final release. Yes, Explorer now has a ribbon, the file copy dialog is improved, and I am sure that there will be more small and cosmetic changes to desktop Windows before final release, but they will be minor.
It seems to me that Microsoft itself has now re-positioned desktop Windows as a kind of legacy environment, even though it is the one that most of us are likely to use most of the time. Irrespective of whether Metro-style Windows is a success, the implications of this are huge. After all, Windows still dominates business computing. Yes, Microsoft will still invest in desktop Windows; but the strategy is focused on Metro-style and it is plausible that Microsoft will never now make radical changes or advances on the desktop side.
Second, Metro-style Windows 8 is not just a touch-friendly user interface. It is designed as a client for cloud services. This is most obvious when you realise that Microsoft has not included data providers for local network database servers like SQL Server; you are meant to interact with data via web services. Metro-style apps are isolated from one another, and can only communicate with the file system, outside their own isolated storage, via specified, user-controlled mechanisms called Contracts. Windows 8 shows that Microsoft really is embracing cloud computing, and that may be more significant than the fact that it runs nicely on tablets.
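To make that concrete, here is a minimal sketch, in ordinary C#, of the pattern Microsoft is pushing: fetch data from a web service over HTTP rather than opening a direct database connection. The service URL, query, and class names here are invented for illustration, not taken from any Microsoft sample.

```csharp
// Sketch: a Metro-style client consumes data via a web service rather than
// a direct SQL connection. The service URL and query are hypothetical.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class OrdersClient
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            // Instead of: new SqlConnection("Server=...;Database=Orders")
            string json = await http.GetStringAsync(
                "https://example.com/api/orders?customer=42");
            Console.WriteLine(json); // a real app would parse this and bind it to the UI
        }
    }
}
```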
Third, and related, is that Microsoft is locking down Windows, especially in the version for ARM which we did not hear much about at BUILD. If Microsoft gets it right, Windows on an ARM tablet will be as secure as an Apple iPad. It is hard to be definitive about this, because the role of desktop Windows in the ARM build has yet to be clarified, but from what I can tell Microsoft plans Windows 8 on ARM as essentially a Metro-style platform, with apps available only through the new Windows Store. If users can only install Metro apps, the entry points for malware are greatly reduced. I suspect that Microsoft also has its eye on Apple-like control and profits from being the only source for Windows 8 apps, with interesting implications for software freedom, at least in the consumer market.
If there is a moment in history when desktop computing became legacy, I suspect BUILD 2011 will be a good candidate.
The Windows desktop will be around forever, and in fact the stability of the platform in terms of forward compatibility has if anything improved, now that we know major changes are unlikely at least until Windows 9 in say 2015, and probably never.
More significant though is that the cloud computing model now has the backing of all the major industry players, even the one with what looks like the most to lose.
CEO Steve Ballmer called Windows 8 Microsoft's "riskiest product bet" and I am inclined to agree.
I have been looking at Microsoft's forthcoming SQL Server 2011, code-named Denali, for which the third preview has recently been released. There is plenty to say about Denali, which has many new business intelligence features as well as the intriguing ability to publish a table as a network share accessible from the file system, but I am particularly interested in the new developer tools, known as Project Juneau.
What is Project Juneau? Well, the old SQL Server Management Studio is being redone using the Visual Studio shell, but what is more interesting is the new SQL Server Database Project in the full Visual Studio, along with some new tools for working with databases.
Now at this point I have a confession to make. I have never given Visual Studio Database Projects the attention they deserve. Visual Studio 2008 introduced a specific database edition, with a specific database project type. In Visual Studio 2010 this became a feature of the Premium and Ultimate editions. Juneau includes the next version of the database project type, now called a SQL Server Database Project.
Just in case others have also paid little attention to Visual Studio database projects, the core feature is the ability to treat databases as code.
How is a database code? It helps to break down what we typically mean by a "database":
1. The data itself.
2. The structure of the database: tables, column types, indexes.
3. Code embedded with the database structure and executed by the database manager, including stored procedures, triggers, and user-defined functions.
Of these, it is only the third category that I had previously considered to be code. I was wrong though. The database schema is also code. Further, since the schema can be instantiated by running SQL create statements, you can conveniently represent a schema with that code. Execute the code, and you instantiate the database schema.
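As a trivial illustration (the table and column names are invented), a schema is just a script; execute it and the schema exists:

```sql
-- Executing this script instantiates the schema, so the script itself can
-- live in version control like any other source file.
CREATE TABLE dbo.Customers (
    CustomerID int IDENTITY(1,1) PRIMARY KEY,
    Name       nvarchar(100) NOT NULL,
    Email      nvarchar(256) NULL
);
CREATE INDEX IX_Customers_Email ON dbo.Customers (Email);
```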
Once you start treating the database schema as code, new things become possible. You can do all the things that you usually do with code: put it under version control, refactor it, compare it with other versions, and so on.
This is what Juneau does. When you import a database into Juneau, it becomes a set of SQL create scripts.
This is also what the old Database Project does, so the concept is not new. Microsoft describes the Juneau tools as:
an evolution of the existing Visual Studio Database project type
which can be interpreted to mean that this is a new product which will eventually encompass everything the old product did and more, but that initially there are compromises: alongside the new features, some existing ones are missing. Since Juneau is still in preview it is impossible to be definitive about this yet.
Still, there is plenty of good stuff in Juneau. The tools follow through on another implication of treating the database as code, which is that you can debug it, by building a local version of the database; Juneau does this using a new local instance of SQL Server. When you publish the database to production, you have a number of options for how the operation is handled, given that there may be an existing database already present. There is always an option to generate a script rather than executing the operation immediately. The same is true if you change the schema of a connected database in Visual Studio's Server Explorer. The Juneau tools show all the implications of any change, warning about data loss where necessary, and offer to generate a script rather than applying the change immediately.
Schema Compare is another useful feature. Imagine that you import an existing database into Visual Studio for application development. This takes three months; but in the meantime the admins have made some changes to the production database, maybe for security or performance reasons. If you have also added some tables and columns for the new application in your development version of the database, this can be awkward to reconcile. Schema Compare lets you see the differences easily.
A goal of the Juneau tools is to make it easy to migrate a database from one platform to another. Microsoft has in mind that some developers will be moving databases from on-premise servers to SQL Azure; but irrespective of whether you have cloud hosting in mind, this is a useful feature.
One of the reasons the old Database Projects are perhaps not as well known as they should be is that they are reserved for the high-end Visual Studio editions. I hope Microsoft makes the Juneau tools more widely available, because treating the database as code is a powerful idea, with benefits that should please the operations folk as well as developers.
There is intense interest in cloud computing today. But what about take-up? I am interested here in the cloud as an application platform, not in how many people are using Google Mail.
There is so much noise from vendors about cloud - noting that this is a nebulous and abused term - that it is easy to get the impression that most of us are busy migrating applications to shiny new cloud platforms, and that new projects will almost inevitably be cloud-based.
I spoke recently to Nick Hines, CTO of innovation at global software developer and consultancy ThoughtWorks. This is a company that has embraced Agile methodology and has always struck me as thoughtful and watchful in its approach to software development. It publishes a regular Technology Radar examining technical trends and assessing which are ready for mainstream adoption and which are in decline.
When I spoke to Hines I was researching application development on cloud platforms, and trying to discover how Microsoft's Azure effort was perceived in the real world. I suppose I expected that the company would have many cloud projects on the go and be well placed to assess the strengths and weaknesses of rival platforms.
The most revealing comment came at the end. After chatting about cloud and Azure for half an hour, I asked: could he put a figure on the proportion of ThoughtWorks projects that involve cloud hosting, not just for development, but for production deployment?
That would be relatively small. One to two percent at this point.
he told me. No more than two out of a hundred projects deployed to the cloud. Considering the level of vendor hype around not only Azure but also Amazon Web Services, Force.com from Salesforce.com, Google App Engine and so on, that is remarkably small.
I must be careful not to mis-represent Hines. He is of the view that not only is cloud significant as a platform, but that it will take over:
This is the way the world is going. We all know it. You can imagine that in 20 years time the idea that companies have their own datacentres is going to be quite anachronistic. How quickly we get there is yet to be seen.
We are then at an interesting point in terms of technology, where we think we can see the future in some respects, but there is a near-consensus in the enterprise development world that it is not yet ready.
Why is it not ready? This of course is a point of debate; but enterprises dislike uncertainty, and there is still uncertainty around cloud platforms. When you ask vendors about the big issues, security and resilience, the best they can do is to point to past performance or give you a speech about the efforts they have made in those areas. CIOs may worry about a nightmare scenario where the system is down and they have no direct control over how it will be fixed. This is responsibility without power; and no, being able to say "we have a service level agreement" is not a solution.
This is also why approaches to the cloud that allow flexibility are popular. Not all risks are equal. For example, you can use a cloud platform as a means of scaling an application at times of peak demand, while keeping the data and code on your own servers. This kind of approach does not yield all the benefits of multi-tenancy or platform as a service, but it means that in case of calamity you can easily deploy to a different platform. The idea of deploying virtual machines to the cloud, while keeping hold of the master image, is popular for the same reason. Hines told me:
People have a greater level of comfort with infrastructure as a service. Whilst it may not have all the advantages that platform as a service offers in terms of reduced administration and so forth, people are more comfortable feeling that they are closer to the tin.
This is the enterprise perspective of course. If you are a start-up or independent developer already accustomed to depending on third-party internet services, then cloud deployment feels less risky.
The consumer perspective is also relevant, despite what I said above concerning Google Mail. If as individuals we learn to trust cloud providers because of our experience with email or personal documents and pictures, then when we step into the business world we will be more inclined to approve a cloud deployment.
Concerning Microsoft Azure, Hines believes that Microsoft needs to prove its ability as a cloud provider, and that a success with the recently launched Office 365 could give Azure a boost, even though it is a different kind of service. It is the same kind of logic: if Microsoft can run Office 365 successfully, the comfort factor over Azure will increase. The reverse is also true.
There is always reason for caution; but it also seems to me that this is a moment of opportunity for those who take well-judged risks with cloud platforms. I would be interested to hear from both developers and CIOs about your perspective on this. Do you trust the cloud yet, and if not now, then when?
I attended Microsoft TechEd USA last month, where the news highlight was a set of new features in Visual Studio. Although Microsoft is not revealing what is coming for Windows 8 development, it has shown new features ranging from code clone detection, which aims to find code that was copied and pasted rather than properly refactored, to new IntelliTrace agents designed to find bugs after deployment, rather than only in code you are developing.
They are decent features, and it seems that the new Visual Studio will further extend what is already an impressive range of capabilities. I have spent a lot of time researching Visual Studio 2010, the current version, and considering the scope of the tool, from mobile devices to multi-tier enterprise applications, I hold it in high regard.
Talk to developers about what they want to see in Visual Studio though, and you can bet that neither code clone detection nor IntelliTrace agents will be on their list. They would rather Microsoft fixed annoyances than added features they may never use. Performance is always high on the list: not doing new things so much as doing the same things faster. Quick access to documentation is another. If you are like me, you often end up searching Google rather than pressing F1, since somehow Google can search the entire internet faster than Visual Studio can summon its own documentation.
Why is Windows Vista considered a flop, whereas Windows 7 has flown off the shelves? I doubt it is to do with thumbnails in the taskbar, or even the Libraries feature, presenting multiple folders as one, a neat feature but often not well understood.
My guess is that better performance is the main reason, followed by hundreds of small usability improvements which Microsoft made. Windows 7 is not perfect, but it generally runs better than its predecessor.
There is always pressure to add features. If you are a software giant like Microsoft, there are marketing reasons; you need those bullet points to win upgrades, or think you do. If you are a corporate developer, there is constant pressure to meet new requirements.
The problem: it is too easy to lose sight of what users often care about more, which is the performance and usability of the applications and features they already use most often.
Somehow, at planning meetings it is hard to justify spending time on improving features that already exist, rather than creating new ones, yet for improving the productivity and even the happiness of users it is often the right thing to do.
I was interested to read Martin Fowler's piece on Cross Platform Mobile. Fowler is Chief Scientist at ThoughtWorks, which does software development and IT consultancy:
I think cross-platform mobile toolkits are a dead-end. It's just too hard for them to really mimic the native experience. If it's worth building a native app, it's worth building it properly, including an individual experience design for that platform.
I do not altogether agree, though he makes a good case and I accept that there are significant obstacles to success. I recommend his piece; all the issues he mentions are real and considerable.
On the other hand, I see it more in terms of acceptable compromises than a binary choice. An Aston Martin is better than a Ford, but a Ford will get me to work and costs less.
The question then becomes: how much compromise do you have to accept if you build a cross-platform mobile app?
Another issue: Fowler says web apps are a capable alternative route, but he adds:
When you do the web app, don't try to make it look and feel like a native app - make it look like a mobile web app
It sounds like good advice; but is there a UI standard for mobile web apps? I am also not sure what this advice means if you wrap a web app as a mobile app using a tool like PhoneGap. Even if you do not want to do this, it may be necessary in order to get access to more native features of the device. Should such an app aim to look more like a web app, or a native app, or is the whole idea a mistake?
Another tricky problem is that with multiple form factors, it is not clear when to apply mobile standards and when to apply desktop and laptop standards. An Apple iPad is a mobile device, but its screen resolution is 1024x768, which is pretty much a full size screen.
There is also a cost involved in not doing cross-platform development. If you only need to support, say, Apple iOS, then fine: get stuck into Xcode 4 and Objective-C in the Apple-approved manner. If you need to support more than one platform though, the case is more difficult. Creating an "individual experience design" for each platform means two code bases to maintain, and ensuring feature parity and fixing bugs in both become issues. Multiply the platforms, and it gets worse.
Adobe's Creative Suite is an interesting case. It works on both Mac and Windows, but the UI is more tilted towards consistency between the two versions, rather than looking native to each platform. Dreamweaver CS5.5, for example, looks nothing like a standard Windows application, with its buttons in the top frame of the main window.
However, I would rather have that, than what Microsoft has done with Office on Mac and Windows. The UI is different, the release cycle is different, the features are different, and in general I find it a better experience on Windows than on the Mac, whereas I am equally happy with Creative Suite on either platform.
My guess is that Adobe, with its own internal cross-platform tools, succeeds in sharing more code between the two than Microsoft manages, which is why it is able to deliver synchronised releases. It has tilted the balance towards consistency across platforms, rather than looking native, and that is a valid compromise.
That said, I have been conducting my own experiments with cross-platform toolkits and it is not going all that well. Each toolkit has involved compromises, even with a simple app, and performance has been an issue.
I can also see that the way navigation between different screens in your app generally works differs between iOS and Android. Which do you choose?
Cross-platform toolkits may not be desirable then; but they may be inevitable (like death and taxes) unless you have huge resources or are willing to lock-in to a single platform.
I also believe that performance issues will reduce as devices get more powerful, and that web technologies which are common between all the main mobile platforms form a runtime that goes a long way towards solving cross-platform issues.
Microsoft's cloud computing platform, Windows Azure, was announced at its Professional Developers Conference in October 2008. That was also the Windows 7 PDC, which diverted attention from Azure; but another problem was that Microsoft itself seemed half-hearted about it. Azure felt like a box the company had to tick in order to keep up with Amazon and Google, when it was happier to keep on selling on-premise servers. It did not help that signing up for the beta was fiddly and difficult - anyone remember those "developer tokens"? - and that the early developer portal and tools were awkward to work with.
Two and a half years later, Azure is much improved, and those early foundations have proved solid despite the poor developer experience. Microsoft has also made it easier to kick the tires. In particular, if you have an MSDN subscription - which most serious Microsoft-platform developers will have - then since April 12 you get free Azure compute time amounting to 750 hours a month of an "Extra Small" instance with a Professional subscription, or more with the higher-level subscriptions. Since even a 31-day month has only 744 hours, an Extra Small instance can run continuously without charge. Even an Extra Small instance is not that small a machine: when I tried it I found a virtual quad-core 2.1 GHz processor with 767MB RAM. There is also an allowance of storage, SQL Azure, and so on. All the current offers, including similar deals for partners, are listed here.
This is a smart move from Microsoft, since previously it was easy to spend money inadvertently. The reason is that Azure charges for deployed instances even when they are not running. You have to delete them completely to stop paying.
But what is Azure? It is worth having a look at some of the more detailed descriptions of how the platform works, like this book extract by Chris Hay and Brian Prince, or this MSDN article, or this description from the perspective of Ryan Barrett who works on Google App Engine. Conceptually, it is a way of deploying applications in the cloud; but it is implemented by deploying virtual machines, with at least one for each application. The reason that deploying an application to Azure takes a few minutes is that the service has to configure a virtual machine image with your code and the correct runtime components, and then copy that virtual machine to one or more runtime locations and spin them up. Azure retains the original, so that it can replace the runtime copy if anything goes wrong. Note the implication that you should never store state or data on the application instance, as it could be wiped at any time. Microsoft also takes responsibility for patching the images with bug-fixes and security updates.
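As a sketch of what that statelessness implies for application code, anything worth keeping should go to durable storage such as a blob, never to the instance's local disk. This example uses the StorageClient library of the time; the container name, blob name, and connection string are placeholders I have invented.

```csharp
// Sketch: persist state to Azure blob storage, not the instance's disk,
// because the instance can be re-imaged at any time. Names are placeholders.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class StateStore
{
    public static void SaveState(string text)
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("appstate");
        container.CreateIfNotExist(); // idempotent: creates the container on first use
        container.GetBlobReference("state.txt").UploadText(text);
    }
}
```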
The new VM role is a bit different. In this case you simply upload your own VM image and Azure runs it; you have responsibility for patching and maintenance. It is actually not a good scenario, since you cannot directly patch the VM in the cloud; you have to prepare a new image and upload it, though you can use differencing to avoid a huge upload. The VM is still stateless, in that if you write any data it can get reverted by Azure to your last upload. See Steve Plank's detailed explanation here. It amounts to a good reason not to use the VM role unless you really need it.
This is just the compute aspect of Azure, of course. It is the other services that make this useful, including SQL Azure and/or Azure's non-relational storage, AppFabric access control which can federate with your on-premise Active Directory, and so on.
What this means is that if you deploy on Azure, and presuming Microsoft has done a good job with the implementation, you get a high level of resilience, and the burden of maintaining the operating system is removed. With that in mind, Azure's pricing looks reasonable to me. You are not just paying for a VM to run your application; you are paying for a substantial infrastructure behind it. If you think about what you would need to install and manage locally to achieve the same level of reliability, then Azure looks like excellent value; and I suspect that for some subset of applications it is the best choice on the market.
Every organisation already has a computing infrastructure of some kind, and despite the well-rehearsed advantages of cloud computing, the cost of doing something different and the fear of losing control of your own IT systems - which is a genuine concern - can make Azure or its cloud competitors a hard sell. At the same time, it seems to me that anyone planning to deploy a new application, or reconsidering how they deploy an existing one, should be weighing cloud as well as on-premise options; and if it is a Windows platform, Azure should be on that list.
Turn this into a skills issue, and it means that knowledge of Windows Azure is an advantage in the job market; and now that Microsoft has made it easy to try, it is well worth getting some hands-on experience.
Last week I attended QCon London, a conference focused on enterprise development and which spans multiple technologies, including Java, .NET, open source, database, mobile, and general development methodologies.
It is among my favourite conferences thanks to its vendor neutrality and the high quality of the speakers and attendees it attracts. Nevertheless, only a tiny fraction of developers make it to QCon. Vendor events like Microsoft TechEd or Oracle OpenWorld/JavaOne are bigger, and great for keeping up with what that vendor is doing, but tend to be less thought-provoking because the content is steered by what one company wants to promote. Even so, it is still only a small minority of developers that make it to any such event.
There are lots of reasons for staying away: they are too expensive, or your company will not make the time available, or you are not convinced that there is enough signal above the noise, or you are too busy simply keeping up with the work you have already. Are conferences an unnecessary and costly distraction?
My view is the opposite. Of course I am a journalist and it is my job to track what is new; but I do some development as well, and feel that there is a real risk of falling into a safe pattern of work that makes us blind to new ideas that can quickly repay in productivity or quality the time spent in learning about them.
It is also remarkable how a good event can recover enthusiasm for the craft of writing software, something easily lost in the humdrum world of requirements and deadlines and sometimes dysfunctional corporate structures.
As for QCon, there were several things I came away with - though bear in mind that there were six tracks, so one person could only attend one sixth of what was on offer, keynotes aside.
I spent some time on the mobile track. A session by Fraser Speirs on the Apple iPad in education was irritatingly Apple-centric, but also stimulating in showing how a new model of computing can bring about profound and beneficial changes. I think we will see a lot of iPads in business computing too.
Jerome Dochez spoke on the future of Java EE, not the most exciting session but helpful to see how Java is embracing the cloud computing model.
Google's Patrick Copeland spoke on innovation at Google, with the underlying question being how to create a culture that is friendly rather than hostile to innovators and their ideas.
At the .NET State of the Art track I learned about creating RESTful services, both with the open source OpenRasta and with the official WCF Web APIs; I had not been paying attention to this area and it was an example of how attending a conference can highlight existing and important developments that you might otherwise miss. Of course being QCon the speakers were OpenRasta's author Sebastien Lambla and on WCF Microsoft's Glenn Block; exactly the people you would want to hear on these subjects.
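For flavour, here is a minimal self-hosted RESTful service in C#. Note that this sketch uses the long-established WCF WebHttp attributes rather than the newer Web APIs covered in the session, and the service name and URI template are my own inventions.

```csharp
// A minimal self-hosted REST endpoint using WCF's WebHttp support.
using System;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public class GreetingService
{
    [OperationContract]
    [WebGet(UriTemplate = "greet/{name}", ResponseFormat = WebMessageFormat.Json)]
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}

class Program
{
    static void Main()
    {
        // GET http://localhost:8080/greet/world returns "Hello, world"
        using (var host = new WebServiceHost(typeof(GreetingService),
                   new Uri("http://localhost:8080/")))
        {
            host.Open();
            Console.WriteLine("Listening; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```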
On the last day at QCon I got best value from the NoSQL track. A simple example in a talk on graph databases and Neo4j, where the database needed to model social connections and answer questions like "Who are this person's friends, and the friends of those friends?", convinced me that SQL relational databases are not the answer to every kind of data storage problem. Note that NoSQL stands for "Not only SQL" rather than "Never SQL"; you should choose the right data store for what you are storing.
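To see why, consider that friends-of-friends is simply two hops through an adjacency structure, where SQL needs a multi-way self-join. Here is a toy C# version of the idea, with made-up data; a graph database answers this kind of query natively and at scale.

```csharp
// Toy illustration: a two-hop traversal over an adjacency list, the kind
// of query a graph database answers naturally. The data is invented.
using System;
using System.Collections.Generic;
using System.Linq;

class FriendGraph
{
    static readonly Dictionary<string, string[]> Friends =
        new Dictionary<string, string[]>
        {
            { "alice", new[] { "bob", "carol" } },
            { "bob",   new[] { "alice", "dave" } },
            { "carol", new[] { "alice", "erin" } },
            { "dave",  new[] { "bob" } },
            { "erin",  new[] { "carol" } },
        };

    static void Main()
    {
        const string person = "alice";
        string[] direct = Friends[person];
        var ofFriends = direct.SelectMany(f => Friends[f])
                              .Where(f => f != person && !direct.Contains(f))
                              .Distinct();
        Console.WriteLine("Friends: " + string.Join(", ", direct));
        Console.WriteLine("Friends of friends: " + string.Join(", ", ofFriends));
    }
}
```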
The Guardian's Matt Wall described how the Guardian web site is migrating from Oracle to mongoDB, giving the rationale and describing the benefits. I had never looked at mongoDB before and it was a fascinating talk.
At a high level, QCon has its roots in Agile development methodology; and if you study this you find that much of it boils down to fostering communication between all of a project's stakeholders (not just developers). If you came away with one good idea for improving communication in your own organisation, the whole event may well have been worthwhile.
It does not have to be QCon. My point is that going out, talking to your peers, and getting this kind of input is enormously worthwhile, even in busy or economically testing times.
I'm just back from Mobile World Congress in Barcelona - one of the largest tech conferences I have attended. I am not sure of the exact figures, but rumour says around 60,000 total attendance. It was buzzing too, with a sense of excitement as companies and platforms jostle for position.
So what is happening? Here are three thought-provoking trends.
In an otherwise rather flat keynote speech, Google's Eric Schmidt made reference to the fact that smartphones outsold PCs in the fourth quarter of 2010, according to figures from IDC.
It depends how you spin it, of course: those figures for PC sales were actually the largest ever. That said, this is not just about raw sales figures. PCs are still essential to many of us, but mobile is where there is more innovation and energy.
I have a special interest in software development, and the simple message for developers is that you now need a mobile client story for most business or consumer applications.
In some cases the difference between a smartphone and a traditional notebook can almost disappear. I have blogged elsewhere about the Motorola Atrix, which lets you dock your smartphone into a notebook-like shell so you can use a keyboard and large screen. I do not think Motorola's design is quite there yet, as it features two distinct Linux shells with an uncomfortable disconnect between them, but it is close.
The vast Hall 8 at Mobile World Congress was the Android hall, including stands from HTC, LG, Motorola, Samsung, Sony Ericsson and, on the chipset side, Qualcomm, NVIDIA and Texas Instruments. However, the truth is that many of the other halls were dominated by Android as well:
Why Android? Android is a phenomena. It is what every operator wants and also what the consumer is looking for.
said George Guo, CEO of Alcatel One Touch, which has a fast-growing business led at the premium end by Android devices.
It was also significant that the System-on-Chip vendors were talking mainly about Android and their work in optimising for Google's operating system.
Here is another simple message for developers. If your application does not work on Android, whether via an app or a web client, it will lack broad reach in the new world of mobile.
Clearly we must not forget Apple. It did not exhibit at Mobile World Congress, preferring events where it can run its own show, though its influence was widely visible. My impression though is that even Apple will struggle to compete with Android in terms of numbers, though it will likely continue to own the high end.
The big news of MWC was Nokia's alliance with Microsoft over Windows Phone. We will not know for a couple of years how this one plays out; but it was an act of desperation by the Finnish company, based on its failure to compete successfully with Apple or Android using its existing line-up of Symbian smartphones, and its lack of confidence in the forthcoming MeeGo devices.
What this means is that even if Nokia's big bet pays off, it no longer drives the mobile phone market in the way that Europeans have been used to. It has never done so in the USA, which is one of the reasons for the new alliance.
This also means a second chance for Microsoft's new phone operating system, which has struggled to find operators or manufacturers willing to put real energy behind it.
Nokia ran into plenty of opposition and scepticism at Mobile World Congress. Nokia's culture, far from being aligned with Microsoft's, is opposed to it, and it is difficult to see any continuity between the Symbian, MeeGo and Qt framework of the past and the Windows Phone of the future.
Nevertheless, Nokia is still capable of putting Microsoft's phone on the map. The developer story is interesting: Microsoft has done a great job of integrating Windows Phone development into Visual Studio, and viewed purely as a mobile development platform it is one of the most productive around, and ideal for extending corporate apps already built with C#. The optimistic view is that Windows Phone has a strong future as Microsoft's platform migrates towards mobile and cloud.
The pessimistic view is that even Nokia's sponsorship will not disrupt Apple or Android.
It is a tough one to call.
When an editor asked me for a screenshot of MonoTouch, which lets you use an open-source implementation of Microsoft's .NET Framework to target Apple's iPhone and iPad, I obtained it the best way I know, which is by installing it and trying it out.
It is something I have been meaning to try for a while. There is high demand for apps on Apple's iOS, and both the iPhone and the iPad are finding their way into businesses. As all those app requests arrive on developer desks, what is the best way to meet them? They cannot be ignored for ever.
I do not doubt the implication of Steve Jobs' essay, Thoughts on Flash, that, other things being equal, the best way to develop for iOS is with Objective-C. Other things are never equal though; and for developers with a ton of existing .NET Framework applications, along with skills in C#, the possibility of creating iOS apps in a familiar language and framework is compelling. There may even be some code that could be ported.
MonoTouch is a commercial product, though you can get started for free, with the main limitation being that you can only deploy to the iPhone simulator.
Installation is not difficult, though there are a couple of big dependencies: Apple's iPhone SDK, and the full desktop version of Mono for OS X. You probably also want MonoDevelop for OS X, the Mono IDE. Oh yes, and a Mac of course. Then I got started. The New Solution dialog presents a choice of several iOS project types.
I picked a Window-based project. MonoDevelop created the project, and I could even compile and run it in the simulator, though it displayed nothing but a beautiful white space.
So far, so familiar for a Windows developer, especially as MonoDevelop feels like a cut-down Visual Studio; but double-click MainWindow.xib in the solution and you are in the alien land of Apple's Interface Builder. Still, thanks to the MonoTouch Hello World tutorial, I soon added some visual elements to the window. Then I selected the AppDelegate class, and added outlets so I could reference them from C# code. You connect outlets to visual elements by drawing a connecting line between them.
My goal was to create a to-do list app, so I added a UITableView for the list, a text input field for new items, and a button for adding them. Removing items can wait for version 2.0.
I saved, and returned to MonoDevelop. As promised, I could now reference the outlets in code. I drew shamelessly on this example of how to code UITableViews in MonoTouch, added a few lines of my own, and soon had a working to-do list app running in the simulator.
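The heart of such an app is a UITableViewSource subclass that feeds the list. My version was close in spirit to this sketch; the class name and details here are reconstructed for illustration, not copied from my exact code.

```csharp
// Sketch of a MonoTouch table source for a to-do list; reconstructed
// for illustration rather than taken from the original project.
using System.Collections.Generic;
using MonoTouch.Foundation;
using MonoTouch.UIKit;

public class TodoSource : UITableViewSource
{
    const string CellId = "TodoCell";
    readonly List<string> items;

    public TodoSource(List<string> items)
    {
        this.items = items;
    }

    public override int RowsInSection(UITableView tableview, int section)
    {
        return items.Count;
    }

    public override UITableViewCell GetCell(UITableView tableView, NSIndexPath indexPath)
    {
        // Reuse cells where possible, as iOS expects
        var cell = tableView.DequeueReusableCell(CellId)
                   ?? new UITableViewCell(UITableViewCellStyle.Default, CellId);
        cell.TextLabel.Text = items[indexPath.Row];
        return cell;
    }
}
```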
Admittedly it is not likely to qualify for the App Store just yet; but even so I was impressed with how quickly you can assemble something like this.
I was also impressed with MonoDevelop. The code completion and error reporting were excellent.
From the user's perspective, a MonoTouch app behaves like any other iOS app. The main snag is that the Mono runtime library has to be packaged with every app, bloating the size to 5MB or more. In the context of the 16GB or more in an iPhone or iPad, that is not too bad. Note this comment in the discussion on the subject on StackOverflow:
I have 21 apps now on the App Store in MonoTouch. All going fine, great comments from users and lots of sales
In the end, that is what counts. If you are a C# developer with a need for some iPhone or iPad apps, MonoTouch is worth checking out.