As the credit crunch bites, demand for IT contractors is quickly picking up as organisations take action while the permanent market slows. Although hiring contractors can be pricier than hiring salaried employees, companies are often willing to pay that little bit extra for someone on a temporary basis, since it allows them to switch that resource on and off as needed. In the current climate it's certainly a trend we've seen developing.
For those considering a move into contract work there are certainly benefits. For permanent staff, a common niggle is the lack of variety in the opportunities available. For many contractors, the freedom and flexibility of their work allows them to focus on a specific project and remain 'psychologically distant' from company politics, moving on to new opportunities when the occasion arises or as their own circumstances dictate.
For the uninitiated: on a typical contract you'd spend around eight hours a day, five days a week, working on a specific project, with your roles, responsibilities and goals outlined in your contract description. Typically your work would be overseen and monitored by a manager within the organisation. However, as jobs tumble, contractors and temporary staff are often the first to feel the effects.
One of the issues people have with moving into contract work is the cutthroat nature of the business. As competition increases, we've also seen contract pay rates decrease over the last few weeks, so it pays to weigh up the pros and cons of such a move before diving in. For those IT staff currently working on contract, the expectation of going from one role to another in quick succession has been tempered, forcing contractors to be much more pragmatic about their work.
The overall feeling remains optimistic; as organisations seek short-term measures to offset a period of instability, opportunities will be there. It's for those with the right skills, determination and confidence to go out and take them.
For IT staff the City has always been a popular destination, with attractive salaries on offer, advanced technology at your fingertips and employers eager to snap up the cream of the IT crop. Yet with the shake-up in the City and jobs foundering, IT workers may have fresh thoughts about the importance of job security. With their confidence in the City dented, IT professionals could be forgiven for thinking a role in finance might not be the most secure career path.
However, no one can predict what the future holds for any industry sector, and despite the economic downturn there is still demand for IT professionals in the City. IT remains at the core of most businesses, improving efficiency, boosting customer experience and often helping organisations remain competitive. As such, few businesses would be willing to cut corners by sacrificing their IT, or the staff who manage it on a daily basis.
Currently the banking sector is showing the same level of demand for IT professionals as before the credit crunch, as many organisations are keen to set up systems that manage risk more effectively. Specialists in middleware, Java and transaction processing are all examples of roles where demand remains high.
Though IT and the City have had a turbulent relationship over the past year, there are still opportunities available out there for the taking, for those who know to look for them.
For years the South East of England has been the Mecca of the UK's IT community, with swathes of IT companies setting up camp along the M3/M4 corridor. Until now, the region's status as an IT hub has rarely been questioned, let alone challenged.
With councils and development agencies ploughing money into local business schemes, and larger corporations setting up shop in the region, the north of England has recently started giving London and the South East a run for its money on the IT scene. After years of lagging behind in terms of opportunities, benefits and technology, northern cities such as Leeds and Manchester have firmly established themselves as the new hot spots on the IT map; something both businesses and IT professionals are beginning to take heed of.
Research conducted last year showed that just under 90% of IT workers in the South East are considering relocating in the next five years: a huge percentage which suggests a growing realisation that opportunities can and do exist outside the Southern counties. What's more, in the last few years even well-known organisations such as the BBC, the Bank of Scotland and the Bank of New York Mellon have embraced the idea of migrating north, as rising costs in the capital and advances in mobile and wireless technology make relocation an increasingly attractive and viable option for big business.
For IT workers, the main concerns about relocating often revolve around the opportunities available to them and the sacrifices they'd have to make to their salaries, with the north-south pay divide playing a significant role in the decision not to move outside the South East. Also, in the current housing climate, a move north that doesn't quite work out in the long run may hamper your chances of finding and buying a home if and when you decide to head back south. On the plus side, however, the pay gap is undoubtedly narrowing: salaries grew by 4.8% in the north of England last year, compared with just 3.7% in the capital. Northern IT workers can now expect to earn an average of just over £30,300 per annum.
In terms of skill demand, the North of England has seen a need for IT support staff with .NET and Oracle skills over recent months, as well as for some of the more niche skill sets such as business intelligence, as companies look to cut costs by analysing their own performance internally rather than engaging outsourcers. As in the South, the Northern IT market is still fairly buoyant, with demand for staff relatively high. If the hectic pace of London or the tedious queues on the M4 no longer appeal, the answer may lie due north.
Adobe's MAX conference in San Francisco this week was focused on what it calls the "Flash Platform", a technology stack oriented around the Flash multimedia runtime. The "platform" word highlights the fact that you can code for Flash and have your application run everywhere that Flash runs, including Windows, Mac, Linux, and some mobile devices, as long as they are not from Apple. It is not a complete platform, being essentially an Internet client, though there are some server-side pieces, such as LiveCycle Data Services, which simplify and optimize communication between Flash clients and Java middleware. You can also blur the distinction between browser and desktop with AIR, which runs Flash outside the browser and adds a local database engine.
So what's new? There was the usual set of announcements. The key ones are as follows:
A new version of Flex and the Eclipse-based Flex Builder IDE, code-named Gumbo. This has a new skinning and component architecture, more advanced text rendering, easier two-way data binding and a new Client Data Management (CDM) feature which from early descriptions looks reminiscent of a .NET dataset. You work with data on the client, storing updates locally, then zap the updates back across the wire in a single update operation. One thing that is not yet clear to me is the extent to which CDM requires LiveCycle on the server; I'll be sure to clarify this in a couple of weeks at MAX Europe (I was not present at the US event). The database aspect is significant, because so many enterprise applications boil down to CRUD (Create, Retrieve, Update, Delete) in one form or another.
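As a rough illustration of the dataset pattern described above (edits applied locally, then pushed to the server in a single batch), here is a minimal, hypothetical sketch in Python. The class and method names are mine for illustration only, not Adobe's actual Client Data Management API, and a real implementation would of course handle conflicts and server round trips far more elaborately.

```python
class ClientDataSet:
    """Hypothetical sketch of the client-side dataset pattern:
    changes accumulate locally and are sent to the server in one
    batch. Not Adobe's actual CDM interface."""

    def __init__(self, records):
        # records: mapping of record id -> row (a dict of fields)
        self.records = {rec_id: dict(row) for rec_id, row in records.items()}
        self.pending = []  # queued (operation, id, fields) tuples

    def update(self, rec_id, **fields):
        # Apply the change to the local copy and remember it for later.
        self.records[rec_id].update(fields)
        self.pending.append(("update", rec_id, fields))

    def commit(self, send):
        # One logical round trip: every queued change goes at once.
        batch, self.pending = self.pending, []
        send(batch)
        return len(batch)
```

Usage would look like `ds.update(1, status="closed")` repeated as the user works, followed by a single `ds.commit(server_send)` to zap all updates across the wire in one operation.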
Catalyst, formerly code-named Thermo, was previewed. This is a fascinating product which converts Photoshop artwork into Flex code; it also allows designers to create and preview a degree of interaction in their designs. Catalyst shares the same project format as Flex Builder. Again, I will be taking a closer look at MAX Europe.
Cocomo (yet another codename) is a cloud effort from Adobe, focused on conferencing. Adobe hosts the services and provides Flex components to enable file sharing, text and VOIP (Voice over IP) chat, whiteboards, and data messaging; there is also user management built in.
Alchemy is a tool that converts C/C++ code to ActionScript, for execution within the Flash player. It's intended for re-use of existing libraries, not for general development.
Third-party announcements that caught my eye included Ensemble's Flex add-in for Visual Studio (though I was underwhelmed by the preview), and Zend's addition of AMF (Action Message Format) into its PHP Framework. AMF is a binary format that optimizes data transfer between servers and Flash clients.
Concurrent programming was a major theme at Microsoft's recent Professional Developers Conference. We all know the reasoning: processors are no longer getting faster, but multiple processors are now commonplace. Even my desktop PC is a quad-core. However, having multiple processors is no guarantee that applications will run faster. For that to happen, the application has to be coded with concurrency in mind. Therefore, if we want to write fast applications, we have to learn concurrent programming.
Unfortunately, concurrent programming is hard. Think deadlocks, synchronisation, non-determinism, race conditions, and so on. Fortunately, Microsoft is good at making hard things easy. The original Visual Basic transformed the coding of a graphical user interface from something intricate done in C or C++ into something anyone could do with a few clicks of the mouse.
Now Microsoft aims to do something similar for concurrency. The .NET Framework 4.0 incorporates the Parallel FX library, which does a great job of simplifying multi-threaded development. The new Task Parallel Library is smart about how many processors it finds at runtime, and spins up the right number of threads to take advantage of them. PLINQ lets you easily apply parallelism to LINQ queries. Daniel Moth does a great job of demonstrating the benefits in his session at PDC, which you can watch online; if you work with .NET I highly recommend it.
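To illustrate the underlying pattern these libraries encourage, here is a hedged sketch in Python (not C#, and not the Parallel FX API itself): the range is partitioned into per-worker chunks, each worker counts primes in its own chunk with no shared state, and the partial counts are summed at the end. Note that CPython threads will not actually speed up CPU-bound work because of the global interpreter lock; the structure, not the speed-up, is the point here.

```python
from concurrent.futures import ThreadPoolExecutor

def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def count_primes_parallel(limit, workers=4):
    # Partition 0..limit-1 into strided chunks, one per worker.
    # Each worker counts primes in its own chunk independently,
    # so no shared counter is needed; partial results are summed.
    chunks = [range(start, limit, workers) for start in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(
            lambda chunk: sum(1 for n in chunk if is_prime(n)), chunks)
    return sum(partials)
```

The per-worker aggregation is the safe style: because each thread owns its partial count, there is nothing to race on, which is exactly how libraries like the Task Parallel Library recommend structuring reductions.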
The worrying aspect is that while Microsoft is making concurrency easy, it is not making it safe. When I was playing around with Visual Studio 2010 and .NET Framework 4.0, one of the first things I did was to look at my code for counting primes and to see how I could optimize it. I found I could do so easily by changing a For loop to use Parallel.For instead, one of the features of the new library. I got a nice speed boost on a two-core machine, not quite double, but nearly so. One snag: the result changed every time I ran it.
I soon figured out what was wrong. My code declares a numprimes variable, then increments it within the loop. Everything is fine when single-threaded, but if you have two threads running the loop in parallel, they might both increment the variable at nearly the same time. Incrementing means reading the value, adding one, and writing it back. Occasionally this happens: thread A reads the value, thread B reads the same value before A has written its result back, both add one, and both write back the same new value.
Result: the value is one less than it should be. The fix is to protect the update with a lock, or to find a better approach altogether; this exact problem is discussed here (scroll down to Aggregation).
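To make the failure mode and the lock-based fix concrete, here is a small Python sketch (not my original C# code) of the same read-increment-write hazard on a shared counter, with a lock serialising the update. Whether the unlocked variant actually loses updates on any given run depends on how the runtime interleaves the threads, which is exactly why such bugs slip through testing; the locked version is always exact.

```python
import threading

def parallel_count(increments, workers=4):
    # Several threads increment one shared counter. Each increment is
    # a read-add-write sequence; the lock makes that sequence atomic,
    # so no update can be lost between a read and its write-back.
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(increments):
            with lock:
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Remove the `with lock:` and the total may come up short on some runs, depending on thread scheduling: the numprimes bug in miniature.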
Bugs are nothing new in programming, but concurrency bugs are particularly tiresome. The code may work fine on some machines; in my example, the bug would not occur on a single-core PC. If your loop is a little less tight than a prime number calculator, it might be that the odds of the bug occurring at all are rather small. It could even pass all your unit tests. Deploy the application though, and you can bet that wrong results will soon crop up with the usual potential for calamity.
Haven't we already had, and survived, this problem with previous abstractions like BackgroundWorker, a class in .NET Framework 2.0 that makes it easy to push some code into a background thread? True to some extent, but BackgroundWorker is less dangerous because it is typically used to keep an application responsive rather than to increase performance with parallel threads on a multi-core system.
The bottom line is that while I am full of admiration for the work Microsoft has done with Parallel FX, I have a nagging worry that as more programmers, including less experienced ones, are encouraged to do this, we may be introducing a new wave of buggy code, as argued in detail by Edward Lee in his paper The Problem with Threads [PDF]. The hardware trend is real though, so I suspect greater concurrency in everyday business applications is coming whether we like it or not.
The other conclusion is that before using something even as apparently simple as Parallel.For, developers have a responsibility to learn about the pitfalls as well as the benefits.
I'm at the Salesforce.com conference in San Francisco, called Dreamforce. The slogan made me smile: the future is looking up. Perhaps it is; but up to which cloud? Microsoft Azure, announced with much fanfare last week at the PDC? Google Apps and AppEngine? Amazon's virtual machines on demand? Or Force.com, the Salesforce.com platform being plugged here? Although Salesforce.com is a CRM application, it is also a platform on which you can create other kinds of applications using the Apex programming language.
As you might expect, Microsoft's announcement is the subject of much discussion here at Dreamforce, with the Salesforce.com executives keen to explain why Azure (which CEO Marc Benioff derides as Azoon) is no match for its own cloud offering. Lindsey Armstrong, co-president EMEA, went so far as to say that Microsoft's effort is not actually cloud computing at all. Her reasoning: with Azure you write your own application and host it on Microsoft's servers. The maintenance of that application remains your responsibility. By contrast, the Force.com platform encourages multi-tenant applications.
The point highlights how the cloud computing buzzword is being used and abused by various vendors, to mean whatever they want it to mean. Underneath though, there are interesting issues. There is pretty much consensus in the industry that more data and more applications are heading for the Internet, with benefits including multi-device support, zero-install applications, availability from anywhere, off-loading of server maintenance, and scale-up on demand. There is no consensus though about what that cloud looks like.
It is easy to see the trade-offs here. The more you run your own code, the more control and freedom you have. On the other hand, the more you build on vendor-specific services, such as Amazon's S3 and SimpleDB, or Microsoft's SQL Services, or Google's Big Table database, the less code you have to write, but at the expense of less flexibility and greater tie-in to the cloud platform you are using. At the extreme is something like Salesforce.com where you share the maximum amount of code, but hand over almost all control to the third-party. App running too slow? Yell at Salesforce.com. App offline? Yell at Salesforce.com. Want to move to another cloud provider? Not easy.
To be fair, cloud providers like Amazon, Google and Salesforce.com realise that their platforms stand or fall on what they deliver in terms of availability, performance and security. Salesforce.com, for example, has a pretty good record of late, though Amazon's is spottier. Still, even if the platform is 100% robust there are business issues as well. I've talked to a number of Salesforce.com customers here, and while they are generally happy with the platform, I've heard concern about the cost. Once you commit to the platform, there are limits to what you can easily do if you become unhappy with it in future.
Benioff is open about this lock-in; he even used that word at the press Q&A, explaining how making the platform programmable increases his hold on his customers. Does that mean you should not use Salesforce.com? Not necessarily; it is not as if lock-in is anything new in our industry, and the platform itself is impressive. Nevertheless, it strikes me as a significant factor. The question "how do I move away?" is important in evaluating next-generation computing infrastructures, and some clouds are harder to escape from than others.