Book of the Future
Are you ready for tomorrow?

Emerging from the Colossal Cave
Thu, 17 Jul 2014 21:21:32 +0000

This post is based on the script from two presentations I gave this week at Creative Kitchen in Liverpool, and Tameside Together in Manchester. You can see the presentation, built using Impress.js, here.


You’re familiar with Moore’s Law, right? Coined by Intel co-founder Gordon Moore back in 1965, it suggests (based on the evidence Moore had witnessed back then) that the number of transistors that can economically be put on a silicon chip doubles every two years. In other words, your computing bang for your buck has been growing at an exponential rate for nearly fifty years.
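Compounded over the period Moore described, that two-year doubling is staggering. A quick back-of-the-envelope sketch in JavaScript (the function name is mine; the formula is just the doubling rule restated):

```javascript
// Moore's Law as compound growth: transistor counts double every two years,
// so the growth factor over a span of years is 2^(years / 2).
function mooresLawFactor(years) {
  return Math.pow(2, years / 2);
}

// Nearly fifty years of doubling, 1965 to 2014:
const factor = mooresLawFactor(2014 - 1965);
console.log(factor.toExponential(2)); // about 2.4e+7: a roughly 24-million-fold increase
```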

This law, and a number of parallel laws about the speed of our digital connections, and the amount of stuff we can stick on a hard disk, have described the technology revolution. Devices getting progressively smaller, cheaper, faster, better.

But I think they miss a vital component of what has changed about technology: it has become more human.

I’ve often described early computing experiences as like travelling to an alien planet. The machine you interacted with was a giant monolith, housed in its own environment, speaking its own language and using its own customs.

Now I have a new analogy.

One of the first computer games I ever played was Adventure, or Colossal Cave, as it was otherwise known. It was on a BBC Micro. I was a bit young initially, but I have vivid memories of my dad and cousin getting quite into it. Colossal Cave was a swords and sorcery epic delivered entirely through the medium of a text interface. ‘Choose Your Own Adventure’ or ‘Role & Play’ novels (perhaps also only familiar to people of my particular vintage) on the small screen.

Now I can’t remember the exact plot or characters of the Colossal Cave, but I imagine, somewhere in the depths of this cave, you may have found an ogre. This, for me, is the early computer. Giant, hulking, slow-witted and inhuman.

A little closer to the surface, and a little more evolved, you may find an ork. These are your pre-GUI personal computers. Communication is easier, but they’re still dim and gruff.

Then come your goblins: smaller, nimbler and more able to interact. Laptops with graphical interfaces, and even access to the Internet.

Today the elf-like smartphone is all the rage. Slender, attractive, and much closer to human in its abilities to interact with touch, motion and voice.

But the elf remains at the edge of the cave. It can look out into the light and shout to us, but it can’t influence our physical world. Without some form of prosthetic there are hard limitations on its reach and strength.

The history of computing over the last half-century for me is one of evolution. Of computers evolving towards a state where their interactions with us are not limited to the screen, and instead they can communicate with us on all the levels that we communicate with each other, and change our environment around us.

As designers and coders we have for years been shining a torch into the Colossal Cave, briefly illuminating the intelligence inside so that we can interact. Now is the time for the computers to emerge from the cave and begin to communicate with us on our terms. But they need our help to do so.

Stepping back from my well-stretched analogy for a minute, there is good reason for us to help.

Do we really want to interact with data via a screen? Even the loveliest high-resolution touch display is an artificial environment relative to the majesty of the world around us. It’s also incredibly low-bandwidth. Think about the breadth of senses you have, through which your brain manages to process information, microsecond by microsecond. Why limit ourselves to interacting over a few million pixels when such rich experiences are available to us?

Computers now have so much data at their disposal, and the intelligence to process it, that we can let them be autonomous. The screen and keyboard were created when we had to manually provide them with all of their inputs, all of their instructions. That is no longer the case. Computers can make decisions based on time, date, weather, environment, location, your social graph and any number of other data points. Why bind ourselves to manual control when they are capable of taking on tasks we no longer need to do?

There is a challenge here though. More than one in fact.

The first one is the age-old sci-fi question: should we? Should we give them this much power? Should we leave behind manual labour? What does it mean for jobs?

The answers to these questions are book-length in themselves, but I’m inclined to think we should accept and even encourage this next step in technological progress. For the simple reason that there are more challenges for human minds to tackle. Why not hand the problems we have already nailed over to machines, if they can solve them more efficiently?

The second challenge is around ‘how’. Because for all my bravado and optimism above, this stuff ain’t easy. Or more specifically, the user experience design challenge isn’t easy.

Here’s an example. I’ve been building my own home automation system, as I have documented on this blog. This is both fun (if you’re a geek like me) and a serious experiment: I’m using the smart home as a small scale model for the smart city. The basics are simple: a few hours, a few quid and some cobbled-together code gets you a system that measures all sorts of environmental variables and allows you to trigger electrical devices in response. But as soon as you start trying to design the user experience, it starts to get really complicated.

Take a simple lamp. I want lamps to come on if it’s dark and there’s someone in the room. And more importantly, to turn off when the room is empty, saving me money and cutting my carbon footprint. You’d think the rules for that would be pretty simple, and they are, until human behaviour gets involved.

Because sometimes we like it being dark. When we’re trying to sleep, or get a little cosy on the sofa to watch a film. When the house keeps turning the lights on in those situations, it gets pretty annoying. So what do you do? Create modes? Change behaviour throughout the day? Have a manual override? All of these things are possible but what you realise is that the number of permutations is enormous: automating response to human preference is really hard.
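Even the toy version of that lamp rule shows where the complexity creeps in. A minimal sketch in JavaScript – all the names and thresholds here are mine, illustrative rather than taken from any real home-automation system:

```javascript
// Toy lamp rule: on when the room is occupied and dark, off otherwise,
// unless a manual override (e.g. "film night") suspends the automation.
const DARK_THRESHOLD = 100; // hypothetical light-sensor units

function lampShouldBeOn({ occupied, lightLevel, override }) {
  if (override !== null) return override; // human wins: true, false, or null for "auto"
  return occupied && lightLevel < DARK_THRESHOLD;
}

console.log(lampShouldBeOn({ occupied: true, lightLevel: 20, override: null }));  // true: dark and occupied
console.log(lampShouldBeOn({ occupied: true, lightLevel: 20, override: false })); // false: film night
console.log(lampShouldBeOn({ occupied: false, lightLevel: 20, override: null })); // false: empty room
```

Note that the override field is already a "mode" in disguise: the moment human preference enters, a pure sensor rule stops being enough.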

This is why we need more people from the creative and digital industries to start experimenting with physical computing. Sure there are a few forward-thinking agencies playing with wearables and microcontrollers. But think about how many websites are produced each year. Imagine how fast we could change our environment and our economy if we produced even a fraction as many digital, physical devices.

There is a particular opportunity here in cities with a manufacturing heritage (and often a surprisingly strong living industry), and a more recent digital scene. Manchester and Liverpool are the two places where I’ve been spreading this message this week.

It’s a simple message and not a particularly original one, but I hope I have carried it to some new audiences. Computing is emerging from the darkness of the cave. Now is the time to greet it and introduce it to our world.

Smart Cities or Home Automation: It’s All About UX
Mon, 16 Jun 2014 05:27:03 +0000

When you walk into my utility room, the lights come on. If it’s dark. The latest evolution in my long-term project to make my dumb (but very pretty) old house smart is a rules engine. Every time I step in and my way is lighted, my heart leaps a little. It’s a simple thing but it entertains. I’m still getting used to not having to turn the lights off though.

In the utility room and my office, both rooms down in the cellar, the rules are pretty simple: it’s always dark, you always need the lights on. But as I’ve started thinking about rolling these rules out across the rest of the house, I’ve realised things aren’t going to be so simple elsewhere.

Take the living room. Imagine you have a similar rule there: if it’s dark, and someone enters the room, turn on the lights. Great. But then you want to turn the lights down, get cosy and… watch a film. What then? Every time you turn the lights off, the system turns them back on again.

So you start to get into conditionals: if someone turns the light off, leave it off. Then you come in the next night and stub your toe because the lights don’t come on.

Maybe you put it on a timer to reset. Maybe you have a series of programmable ‘mood’ macros that set the rules depending on what you’re doing at the time. The point is not that there aren’t solutions. It’s that they are inevitably complex and need thinking about. They need designing.
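One of the fixes floated above, a manual override that resets on a timer, might be sketched like this (the one-hour timeout and all the names are illustrative guesses, not the rules engine this post describes):

```javascript
// Manual override that expires, so tomorrow night the lights come on again.
const OVERRIDE_TTL_MS = 60 * 60 * 1000; // reset after an hour (arbitrary choice)

function makeLampController(now = Date.now) {
  let overrideUntil = 0; // timestamp until which a manual "lights off" applies

  return {
    manualOff() { overrideUntil = now() + OVERRIDE_TTL_MS; },
    shouldBeOn(occupied, isDark) {
      if (now() < overrideUntil) return false; // respect the human, for a while
      return occupied && isDark;
    },
  };
}

let t = 0;
const lamp = makeLampController(() => t); // injectable clock, for testing
console.log(lamp.shouldBeOn(true, true)); // true: occupied and dark
lamp.manualOff();
console.log(lamp.shouldBeOn(true, true)); // false: override active
t = OVERRIDE_TTL_MS + 1;
console.log(lamp.shouldBeOn(true, true)); // true again: override expired
```

Even this small design makes a judgement call on the user's behalf (how long should the override last?), which is exactly the kind of decision that needs designing rather than defaulting.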

Lessons from Santander

Demonstrating points like this in practical terms is part of the reason I started Project Santander, my home automation project, named after the home of the European smart city project run by Telefonica. The home is a good proxy for the city, where these design issues are only magnified.

Instead of having to satisfy my wife and children’s understandable demands for a house that doesn’t frustrate more than automate, imagine having to satisfy a whole city of people. This is the challenge facing the mayor of Santander and the team working with him from Telefonica and the University of Cantabria.

As I’ve highlighted before, the hardware challenges of Project Santander were (relatively) straightforward. I now have nodes around my house collecting temperature, humidity, light level, and presence, and allowing me to control things like the lights above. These nodes are not dissimilar to what’s being used in Santander. Yet they cost me just £10 each and were built entirely from off-the-shelf, open-source hardware and software. In Santander they rolled out 20,000 sensors for just €1m – their pricing is not an order of magnitude different to mine.

These 20,000 sensors generate relatively little data to transfer and store: when there were 12,000 sensors in the city, they were storing just 5MB of data per day. This makes sense: a temperature reading can be stored as a single byte of data – potentially less. Do the maths:

  • 1 byte per reading
  • 60/5 = 12 readings per hour = 12 bytes per hour
  • 12×24 = 288 readings per day = 288 bytes per day per sensor
  • 12,000 x 288 = 3,456,000 bytes per day = 3,375 kbytes per day = 3.29 MB per day
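The same sum, done in code with 1024-byte kilobytes:

```javascript
// Daily data volume for the Santander deployment, at one byte per reading.
const readingsPerHour = 60 / 5;                     // one reading every five minutes
const bytesPerSensorPerDay = readingsPerHour * 24;  // 288 bytes per sensor per day
const sensors = 12000;
const bytesPerDay = sensors * bytesPerSensorPerDay; // 3,456,000 bytes
const mbPerDay = bytesPerDay / 1024 / 1024;

console.log(bytesPerSensorPerDay); // 288
console.log(bytesPerDay);          // 3456000
console.log(mbPerDay);             // about 3.3 MB per day for the whole city
```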

Bear in mind the temperature probably doesn’t change every five minutes. And even if it does, you can do some smart things to store many fewer readings than this, or store them more efficiently. Likewise with humidity, noise level, light, parking space usage. This is what accounts for the fact that many nodes have more than one sensor. This stuff is not particularly ‘big’ data.

So it’s cheap and easy to collect, transmit and store. What’s the challenge?

Making it useful. Making it intuitive. Making it human.

This is a user experience (UX) challenge. A design challenge. And it’s an opportunity that I don’t think much of the tech community – particularly those with the biggest skills for and focus on user interface design – has yet grasped.

Stop Redesigning the Web. Start Redesigning the World.
Thu, 22 May 2014 07:44:29 +0000

Friction ignites innovation.

The examples are all around us. Take banking. I tried to send money abroad this week. Urgh. A thoroughly 20th century experience, replete with long acronyms, complex codes and lots of cost – 25% of the amount I was trying to send.

It’s no surprise that finance, banking and payments are hot spaces for innovation, from crowd funding and peer funding, to merchant systems, to whole new currencies. Organisations like PayPal, iZettle and bitcoin are tackling a staid old system that has failed to move truly into the internet age.

That age is characterised not just by technology but by culture: openness and sharing, of hardware and software, interfaces and protocols. A culture of speed and action.

Increasingly that culture is moving out of the digital world and into the physical. The new hardware categories – wearables, smart home, 3D printing – are much more open than their predecessors. If the hardware itself is not an open design, based on off-the-shelf components, then the software usually offers an API. Having had my Nest smart thermostat installed, I can’t wait to start playing with its API, integrating it into my own home automation system.

That system, Project Santander, has itself been built on these internet principles: rapidly prototyped using off-the-shelf hardware and shared software components. The most challenging part of its construction? The core software.

This software has been created using the simplest of web technologies: PHP, MySQL, HTML, CSS, JavaScript. Because I don’t know anything else – in fact I barely know this. As I will happily concede to anyone, I am no coder.

Imagine what you can do with greater skills. Imagine the problems you can tackle. For me what is exciting about the ‘Internet of Things’ is the application of all those internet principles and skills to physical world problems. Skills of design and code that used to be confined to tackling problems in the virtual realm can now be applied to the physical. Energy, safety, health, fitness, food, education, and much more; the possibilities are endless.

This theme has a particular relevance and resonance in Greater Manchester, a place where great leaps forward in the science and technology of the physical world and the virtual have been made. Officially there are 45,000 people in the digital and creative sectors in Greater Manchester.

Imagine what we can do with the world if our digital skills are increasingly applied to physical problems.

The New Digital Divide: Makers and Consumers
Wed, 14 May 2014 10:10:08 +0000

I have a new laptop, at least for the length of this trial. The team at Dell have loaned me an XPS and I have to say it’s flippin’ awesome. OK I don’t have to say that – it wouldn’t be much of a trial if I did – but it’s true.

In between trials of new machines I operate on a five year-old desktop or a six year-old laptop. Both are perfectly functional but limited. The laptop performs admirably for its age, thanks to a lightweight Linux OS, but unfortunately its frame is anything but lightweight: more luggable than portable. The desktop is very comfortable to use with its big screen and a posh mouse and keyboard (thanks to a never-ending trial from Logitech). But a lack of RAM means it becomes a little ponderous when running lots of Chrome windows or anything else taxing.

By contrast this new laptop has everything: slender metal frame, Core i7 processor, buckets of RAM, and a battery that lasts so long I’ve stopped bothering to carry the charger. Even if I use the laptop to charge my phone and other devices it seems to get me through days of work.

I can’t definitively say this is the best machine out there for the money – it’s not that sort of test. But its sheer capability has reminded me of something: the dramatic difference that remains between a ‘real’ computer and a tablet or smartphone. For me this is an increasingly important frontier in the digital divide.

Makers and Consumers

Because I’m using this machine as my main device for the period of the trial, I’ve had to install my regular software stack on it. I could automate this process and probably will in future, but it’s actually quite interesting to install things as and when the need arises. It makes you very aware of the software on which you’re most reliant. In the past this approach has also made me very aware of the (un)availability of an internet connection when you need one. But on my second trip over the Pennines in as many weeks, I find myself happily downloading hundreds of megabytes of software over Three’s 3G and 4G networks. There’s a reason I have an unlimited contract…

The software I have installed started with a browser or two: Chrome (for browsing, mail and apps) and Firefox (for web design and testing). Then a text editor (Bluefish) and version control (Git). Then an office suite or two – LibreOffice and MS Office. And finally the Arduino IDE for more development on my home automation system and robots. I’ll probably add GIMP and Inkscape at some point but I haven’t needed them yet.

Now, browsing I could do on a tablet. Email too. I’m pretty adept at typing on a screen and have a nice dinky Logitech (again) keyboard for my iPad Mini. But code? Version control? Spreadsheets? Document design? Presentations? None of these are things I would like to tackle on a tablet today. For these things a laptop or desktop is ideal. In fact, they are necessary.

Consumption not Creation

Tablets and smartphones today are tools of communication and consumption, not creation. There are two reasons for this. Firstly, the interfaces. Ancient though it may be, the keyboard and mouse combination remains our best interface to most of the tools of digital creation.

The exceptions are audiovisual: tablets can competently capture audio, video and images, and using a stylus designers and artists can draw on them. But for most other tasks the touchscreen interface lacks fidelity: even if you can capture your words, manipulating the documents you’ve written is a massive PITA.

This is not just the fault of the screen and fat fingers: the user interface trades off capability for ease of use. This is the second reason that touchscreen devices are limited. The operating systems and apps that sit on them are designed to be incredibly intuitive and usable with a few touches and swipes. This is great, but it means they are usually simplified to some extent, leaving you without the power and control that you might be used to on a desktop or laptop. The power and control you need to be a true digital creator.

The Real Digital Divide

The digital divide is generally accepted to mean the gap between the connected and the unconnected. Those with and without access to the internet. Today in Britain 83% of households have some form of internet access, with the majority of those that don’t reporting that it is lack of need/desire that stops them, rather than a lack of finance or skills. A large proportion of those are over 75. In short, over the coming years the digital divide by this measure is likely to narrow significantly, leaving a hard core for whom skills, disability and cost are the issues. These are issues that can and should be tackled, since it is increasingly hard to navigate modern life without internet access. Not having access can put you at a significant disadvantage from a consumer perspective, as much as anything else: things bought online are often cheaper.

With my futurist’s hat on though, I am more concerned about a different digital divide. That between digital consumers and digital creators.

Putting a connected tablet or smartphone into someone’s hands and equipping them with some basic skills may enable them to participate in digital life. They can use eGovernment services, shop and ‘join the conversation’ on social media. But they can’t make an awful lot – at least not anything of business value. As I pointed out above, these devices are great for audiovisual media, but YouTube and Instagram are awash with wannabe Spielbergs and Baileys. Only so many people can succeed in this field the way Jamal Edwards has.

The digital divide we should be measuring is that between those with access to the skills and the technology to create new products and services, and those with the capability to consume them.

The Three Cs

This is a much harder measure. But I think it is possible. In discussions around the future of work and skills, I have come down to a simple ‘Three Cs’ of skills that are vital for participation in tomorrow’s increasingly digital economy. And no, one of them is not ‘Coding’ (at least not exclusively).

Curation is the ability to find, qualify and absorb information. It’s about search skills and fact checking, knowing the difference between something being written and something being objectively true.  It’s about being able to put that new knowledge into context. The ability to do these things fast, effectively and reliably is vital.

Creation is about synthesis and ideation. The ability to take information that you have discovered and use it to create something new. That might be code, it might be language, it might be design, it might be a new 3D-printed or micro-controlled product.

Communication is about your interface to the rest of the world. People remain at the heart of an increasingly digital society and economy. You need the personal and technical skills to be able to make your arguments and ideas compelling.

All of these skills can be taught and tested.

Breaking Barriers

If we are to break down the true digital divide, the one that threatens to bar many from economic participation in the growing digital society, we need to focus on issues greater than simple connectivity. We need to recognise that the increasingly dominant touchscreen devices that are becoming ever cheaper and easier to use, will not in themselves help us to bridge the gap. In fact they threaten to widen it. As touchscreens give way to voice and gesture interfaces, and we are further abstracted from the underlying technology, the threat only increases.

At home, at work and in education we need to understand the true nature of the digital divide and change our behaviour accordingly. The Three Cs can be taught and have to be, not just at school but beyond and throughout life.

We need to ensure that everyone has access to the tools of creativity, as well as consumption.

Never Trust Someone Who Offers the Same Answer to Every Question
Mon, 12 May 2014 09:29:55 +0000

One of the challenges of being a futurist is communicating the difference between what you believe to be true, and what you would like to be true. I try to maintain a good distance between the two.

Last week I was speaking at the Wired?14 customer event for Daisy Group PLC, about the impact of technology on our personal and working lives. I highlighted that the link between economic growth and employment has been broken since 1999, referencing The Second Machine Age. After centuries where mechanisation and automation has been part of a cycle of creative destruction that ultimately generated greater overall wealth, digital technology seems to be concentrating wealth in the hands of few and destroying more jobs than it creates.

I could be accused of being a bit of a cheerleader for technology: I believe that it has been a huge factor in our increasing health and wealth over the recent past. But I’m not suggesting that it is an unalloyed good: technology presents issues we have to tackle.

I was collared after the event by someone who took issue with my analysis. He believed markets would solve the jobs problem – in fact he believed markets could solve every problem. I suggested the state would absolutely have a role to play and intervention would be required at some point if we were not to have a deeply unequal society (even more so than today). My challenger scoffed and painted me as an old-school socialist in favour of some form of centrally-planned economy.

Arguing against what you would like someone to be saying is often easier than arguing against what they are saying.

I was raised in a fairly lefty household and retain a lot of the values that I absorbed there, particularly around the role of the state. I’ve since spent nearly fourteen years entirely in private business – nine of those self-employed and growing my own businesses. I recognise where the market can do good things. What I don’t believe is that the state can solve everything or that markets can solve everything. To me either of those views is a kind of dangerous fundamentalism, little different to the worst excesses of religion.

Believing that there is one answer to many problems is generally a huge error.

Being Charitable

At the end of the week, I headed off to Edinburgh to run a workshop on the future of charities for the SCVO. Here I highlighted the growing role of technology in work and home life, and how it is transforming each. As often happens, this was taken by some as an argument for the increasing role of technology in both of those spheres. “But human contact is important. We can’t do what we do unless we’re face to face,” is a refrain I hear often in these sessions.

This may be true. But that doesn’t change the fact that there are other organisations competing in the digital sphere for the same funds and few seconds of attention that are the lifeblood of many charities. The evidence would suggest that we have a limited reservoir of mental and financial capital to donate. Though they may not be able to deliver their most important services digitally, charities have to compete digitally in the worlds of campaigning and fundraising if they are to maintain their support, and therefore the ability to conduct their work in a face to face manner.

More than that, the current financial climate would suggest that charities need to find ways of introducing digital means into their most fundamental operations. Because they may well be trying to support as many or more people with reduced resources. Anyone can fund-raise now. JustGiving and other platforms give individuals the campaign tools that would only have been available to established charities in the past. Campaigns like the #NoMakeUpSelfie or Stephen Sutton’s appeal, show the incredible speed with which new media can attract funds – funds that will then not be given to other causes that might historically have received them. More than one charity has told me that their individual donations are going “off a cliff”.

One Size Does Not Fit All

In each of these sessions and others, for businesses, local government and charities, I have talked about Stratification: my model for how successful modern businesses increasingly seem to be organised, and a model I believe can be applied to lots of different organisations. What I don’t propose is that there is a single template that can be simply copied and pasted between different charities, businesses and councils. There might be a common idea that can inform the approach in each case, but each case must be handled differently. Just like the answer is not always market or state, the answer is not always “What would Amazon do?”

This is the reason I’m currently exploring ways to expand my consultancy operation – without building a traditional consultancy business. There’s lots of demand for people who can apply original thinking to challenging problems, particularly people who are literate in the current and next generation of technology-driven change – i.e. applied futurists. While my focus is on research and writing, I really like the ‘applied’ part of applied futurism: tackling challenges and driving innovation.

There’s also a broader point here: consultancy is a time-based business, one that is hard to scale – in complete contrast to the software product businesses that are driving the current wave of automation, and arguably widening the gap between rich and poor. In consultancy you largely get paid just once for the work that you do. If you create a successful commercial software product, be it Facebook or Microsoft Office, you get paid many times over. In industry parlance, product-based businesses scale better than time-based businesses.

As products are increasingly commoditised and standardised, and even made open source, I can see the (im)balance between product and service beginning to change. There will always be new products to be created that can be sold at a high margin – I’m not naïve enough to believe that we have approached the point at which everything has been invented (and I’d be very disappointed if I thought that were true). The market is probably the right way to price, and incentivise the creation of, these new products.

But perhaps there will be an increasing base of products, devices, software and services that will be either free or cheap, because they are open or in highly competitive markets. Products and services where the only real money to be made is in the customisation, consultancy or personalisation. This is arguably already the case for the web.

Technicolour Riot

Unless the last fifteen years of data is a blip – and that’s possible – I don’t think the market is likely to solve the onrushing jobs challenge. I don’t think governments can either. Nor can the open source movement, standards bodies, charities or anyone else.


The joy of this planet is that the answers are rarely black and white. They’re not even shades of grey. They are a glorious technicolour riot of diversity. Any ideology that suggests one answer to the multitude of challenges and opportunities facing us, is guaranteed to be wrong, whether it proposes human beings or technology, markets or states.

If someone tells you they have one answer for everything? Don’t trust them.

Stratification: Has the Integration vs Specialisation Question Been Answered for Tomorrow’s Business?
Fri, 04 Apr 2014 08:24:08 +0000

In the future, if you want a big and profitable business, are you better served focusing on your core value or owning every step in the supply chain, operating every component? It’s an age-old question, the answer to which has been subject to wide debate and big swings in fashion, probably since the earliest enterprises began.

Each time the issue is raised it comes with new buzzwords, different components having their own terminology for moving in or out of the organisation. Call centres (‘outsourced’), development (‘offshored’), telephony (‘hosted’), software (‘SaaS’), hardware (‘cloud’), etc. Logistics, marketing, finance, property, retail, manufacturing, design, and even the original product and service concepts themselves can all be offloaded.

In the ’90s and early noughties it was all about moving functions out of the business, like call centres and software development. And then came the turning of the cycle, the inevitable backlash. Companies now make a big point of having ‘UK call centres’. I’m hearing grumblings about outsourced development that may see similar slogans slapped on locally-built software.

Purity of Principle

The reality is that few businesses can exist in either ‘pure’ state: entirely lean and focused, or monolithic and integrated.

Take Apple, for example, a famously vertically-integrated business and one that bucked the trend through the 90s and noughties. Apple owns and controls a large number of the steps in the content supply chain. The design and manufacture of the devices, the software that runs on them, the shops that sell them and the online shops that sell the content for them. But it still outsources large parts of its manufacturing. It has to source components from a wide number of suppliers, some of them competitors elsewhere in the market. And it opens up its own software and content marketplace to a large number of suppliers.

Tiers of Business

It is this tier of the business that I find most interesting. Because Apple both competes in this marketplace with its own applications, and allows others to enter – even if the applications compete directly.

Amazon does the same, operating a vertically integrated business but allowing others – even competitors – access to certain tiers. Take the layer diagram I use to break down so many things into manageable chunks.

Stratification Layer Diagram

Action Layer: This is the consumer.
Presentation Layer: This is the online shop with which so many consumers are now familiar.
Processing Layer: This is Amazon Web Services. What few consumers know is that Amazon allows other companies, including other online retailers, to use its incredibly powerful and cost-effective computing platform to host their shops, applications, databases, and files.
Connection Layer: Amazon’s logistics operation is enormous and expanding. It doesn’t serve other retailers today, but it could.
Collection Layer: Likewise Amazon’s procurement operation is not yet open to others. But could it be?

This situation is replicated across technology businesses in hardware and software alike. Companies are increasingly tiering their organisations and recognising that those tiers have to be treated as standalone markets. There is no problem dealing with suppliers or customers in one tier who might be competitors in another. Witness Samsung making screens for itself and Apple. Intel fabricating chips featuring arch rival ARM’s core processor designs. Microsoft finally releasing Office for iPad.

Low Friction Interactions

What’s most interesting about this ‘co-petitive’ trend is not that it is happening in these different layers. This has been the case for some time – for example, Microsoft has long made software for, and been an investor in, Apple. What I find interesting is how the layers interact.

To give a practical example, I’m building a dashboard for my business. This will show me the key performance indicators for my weird hybrid of a business, at a glance. To do this I’m using a very nice framework called Dashing, built by techies at the ecommerce platform Shopify (a company which itself could be a nice case study in stratification). Because Dashing is built in a particular language (Ruby) that my usual web hosts don’t support, I had to try an alternative. Having heard good things I checked out OpenShift; I set my application up and got it running before I’d really read around what it is.

Put simply, OpenShift acts as a smart management/integration/user interface layer over Amazon Web Services – Amazon’s web and application hosting platform. When I add my application to OpenShift what it actually does is set up hosting for me on Amazon’s platform. It can do this, seamlessly and transparently, because the interface between the two systems is entirely automated and programmatic. Even though a new order potentially means the reconfiguration of physical hardware, the conversation happens entirely in data.
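A toy sketch makes the shape of that conversation concrete. The endpoint and field names below are invented for illustration – they are not OpenShift’s or Amazon’s real API – but the principle is exactly this: the whole order, from one company to another, is structured data.

```python
import json

def provision_app(name, runtime, gears=1):
    """Sketch of a programmatic provisioning order between two platforms.

    The field names are hypothetical. In reality the request JSON would be
    POSTed to the hosting platform's API; here we simply simulate the kind
    of response that comes back. No human touches any step.
    """
    request = json.dumps({
        "action": "create",
        "app": name,
        "runtime": runtime,
        "gears": gears,  # units of capacity, invented for this sketch
    })
    # Simulated upstream response: the order is acknowledged in data too.
    response = {"status": "created", "app": name,
                "url": f"http://{name}.example.com"}
    return request, response

req, resp = provision_app("dashboard", "ruby-1.9")
```

Commands out, services back – and because both sides are just exchanging data like this, the marginal cost of each new customer is close to zero.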

This interface between the two companies and the two systems is incredibly low friction. Commands flow in one direction; costs and services flow back in the other. It means Amazon can afford to sell its services to lots of people in lots of ways, because the cost of reaching and supporting them is incredibly low, further enhancing (or at least maintaining) its low prices. Because interactions happen in real time and are data driven, monitoring and resolving any breakdowns in the service is quicker and, if not easier, then at least based on good evidence – not always the case in human-to-human interactions.

People as a Service

Stratification typically applies to business units or functions. These can be neatly packaged with an interface on the outside that allows the relevant information and resources to be passed back and forth. But increasingly people are becoming package-able units too.

Through my conversations with Tim Lovejoy on the future of work, I’ve been looking a lot at the increasing drivers towards self-employment. At one end of the economy, self-employment is almost being mandated. Under-employment and zero hour contracts are driving people to juggle multiple ‘clients’, and supplement their income with other work. At the other end self-employment is ever more appealing: software and web-based services take much of the pain and cost out of running your own small business, and it remains highly attractive from a tax perspective. Businesses are even encouraging senior employees to move to more flexible working, recognising the potential cost savings on both sides and more importantly the valuable learning that executives bring back from other businesses.

This is essentially stratification of the workforce. Employees are becoming thin layers, or segments of layers in their own right. Via corporate collaboration software or virtual work exchanges, each has their own low-friction, data-driven interfaces to the other layers.

What is crucial in an arrangement like this is ensuring that the business retains access to the workforce that it needs, when it needs it. And that the workforce maintains a relationship with the business that supports their loyalty and motivation. Achieving this in a flexible role for a senior executive is one thing. Achieving it for a large workforce on zero hour contracts, quite another.

Advantages of Stratification

I believe that stratification via a low friction, programmatic interface between networked business units right down to the individual scale, is becoming an increasingly visible trend in the structure of business. And for good reason. It improves the interaction between layers in an organisation, and it allows those layers to be opened up to third parties. This approach, and the business philosophy attached to it, has a role to play in the development of many organisations across the public and private sectors.

Some of the key advantages it offers are:

Agility: One of the most common questions I am asked in consulting engagements is how companies and organisations can become more agile. By codifying and streamlining their interactions, stratification allows the different parts of an organisation to be moved around more easily. New interactions with other departments or third parties can quickly be added to support new products, services or enhancements.

New Revenue Streams: In the organisations I deal with there is often untapped value, in various repositories of data or well-developed internal processes. Stratification not only increases the visibility of these hidden treasures but also creates the means by which they can be accessed.

Reporting: Stratifying the organisation forces the automation of certain systems that may have held out until now, including reporting. I haven’t encountered many organisations where good quality performance data is reaching the people who need it in a timeframe that many would consider ideal – i.e. real-time.

Disadvantages of Stratification

By its very nature stratification lays bare the inefficiencies in a business and drives automation. As many people are noting now, digital automation may drive productivity but it doesn’t create jobs. Like any business change, stratification will encounter resistance and this resistance will be reinforced by the level of transparency it creates around the activities of departments and individuals.

Principles of Design

Where organisations have succeeded in stratification there seem to be some clear design principles emerging.

Codify Inputs and Outputs: The first step is to break an operation down into clear units that have defined and repeatable inputs and outputs. This won’t be possible with all pieces of the business, but in some cases this may highlight where one unit is being tasked with multiple jobs that ought to be split. For example, it can be hard to operate efficiently when one team is tackling parallel work streams with very different work flows, time scales and deliverables.

Systematise Processes: While most of our work (though not all) may be computerised, much of it is not systematised. The work flow in departments is not a defined, documented process. Instead knowledge is locked in people’s heads. Processes are carried out using general office software packages and email, rather than tools that encapsulate the process in themselves. These tools can guide users through a process and check for errors along the way, increasing quality, reducing completion time and enabling new staff to be trained very quickly. Most importantly systematising the process automates the production of performance data, giving management much clearer insight into each corner of the business.

Open Data: Data is the language of stratification. This is not a diminution of the importance of human relationships in minimising the friction in a business, but a recognition of the fact that the speed and precision of data can overcome some of the limitations in a purely ‘meatspace’ process. Opening up restrictions and overcoming fears about the publishing of data, internally and externally, privately and publicly, is a huge part of the stratification process.

Publish APIs: An API is an application programming interface. It is the means by which commands and data can be sent and received from a piece of software by another piece of software rather than a human user interface. Think of it like the pins and holes on a piece of Lego that allow it to connect to others. Just like Lego, with an API to your software – or your departments – it is much easier to rearrange the building blocks of business into new configurations, supporting new processes or creating new services or products.
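To make the Lego analogy concrete, here is what a department might look like with an API wrapped around it. The ‘invoicing department’ and its operations are entirely hypothetical; the point is that callers see only the pins and holes – defined inputs and outputs – never the internals.

```python
class InvoicingAPI:
    """A hypothetical department exposed as a programmatic interface.

    Whether invoices are raised by a person, a spreadsheet or a workflow
    system is hidden; other blocks of the business only see these calls.
    """

    def __init__(self):
        self._invoices = []

    def raise_invoice(self, customer, amount):
        """Input: who and how much. Output: a reference number."""
        invoice = {"id": len(self._invoices) + 1, "customer": customer,
                   "amount": amount, "paid": False}
        self._invoices.append(invoice)
        return invoice["id"]

    def status(self, invoice_id):
        """Any other block -- or a third party -- can query progress."""
        return self._invoices[invoice_id - 1]

api = InvoicingAPI()
ref = api.raise_invoice("Acme Ltd", 1200.00)
```

Swap the implementation behind the interface and nothing else in the organisation needs to change – that is what makes the blocks rearrangeable.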

Applying Stratification

Stratification is not a wholly original idea, more a synthesis of some key trends in the development of organisations and businesses over the last thirty years. It is in itself a ‘re-combinatorial innovation’. That gives us confidence that even though the idea is at an embryonic stage, at least its component parts are sound.

Where stratification perhaps most extends existing thinking is in the idea of applying the principle of an API to a department or even a person, rather than just a piece of software. This will not be universally popular. But it’s important to repeat: this is not about valuing computers over people, or diminishing the role of people in an organisation. We are not expecting people to behave like software. Rather, as has been pointed out in multiple books and studies (see ‘The Second Machine Age’), there are things that machines do better and there are things that people do better. Recognising this and separating the work accordingly has long been a recipe for increased productivity.

Capturing process in software, using that captured process to handle inputs and outputs, and deliver good performance metrics makes a lot of sense. Using this capture process to create a wrapper around departments that can then be understood and manipulated as logical blocks in the organisation makes sense. Allowing third parties to access the now open, standard interfaces that connect these blocks together also, we believe, makes sense. Inside that well-defined framework should be greater, rather than lesser, opportunity for people to do the things that people do best: interact, innovate and improve the business around them.

Humanoid Robots: The User Interface for the Internet of Things? Tue, 11 Mar 2014 11:20:54 +0000 Two of my projects are merging. This morning, inspired by a second viewing of Iron Man 3 (yes, I want to be Tony Stark) I finally finished assembling RoboRaspbian. And realised that he should actually be part of Project Santander. Cue more modifications…

Quick recap: I like to build stuff, partly for fun, partly to exercise my brain, and partly to test ideas out about the future. RoboRaspbian started relatively simply: I found an original RoboSapien toy, minus his remote control, in a charity shop for £3 or so. Seemed like a bargain but I wanted to be able to control him.

I looked at replacing his microprocessor but this seemed unnecessary when I could get all the movement I wanted just by sending the right commands to the existing one. I found someone had turned a Raspberry Pi into a universal remote control capable of outputting the right commands, and so the project became ‘strap a Raspberry Pi to the back of a RoboSapien’. With some relatively simple electronics to bridge the two (read LOTS of trial and error), RoboRaspbian was born.

Then I thought: wouldn’t it be nice if I could make him talk – more than just his original, limited vocabulary (mostly yawns and farts). Get him to converse with the kids. I’d already done this on a previous robot project, Sammy. So I added in the text-to-speech engine Flite, and added an amplifier to the small amount of spare space inside the robot’s chest. This hooks the audio output of the Raspberry Pi into the original speaker, matching the volume of his in-built sounds.
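For the curious, driving Flite from the Pi is close to a one-liner. This sketch assumes the standard `flite` command-line binary is installed (`slt` is one of its bundled voices); it degrades gracefully on a machine without it.

```python
import subprocess

def say(text, voice="slt"):
    """Speak `text` through the robot's speaker via the flite CLI.

    On RoboRaspbian the Pi's audio output is wired through an amplifier
    into the RoboSapien's original speaker, so this is all it takes.
    """
    cmd = ["flite", "-voice", voice, "-t", text]
    try:
        subprocess.run(cmd, check=True)
    except (OSError, subprocess.SubprocessError):
        # flite isn't installed (or failed) on this machine; carry on.
        pass
    return cmd

cmd = say("Hello, I am RoboRaspbian")
```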

So now I have a talking, gesturing robot. Nothing particularly smart about him though: he is entirely human-controlled. But hang on a second: I’ve spent the last few months rolling out sensors around my house*. They could feed him with all sorts of interesting data. Getting him to tell me when certain areas were too cold, or when humidity levels got too high would be really cool. Plus he can gather all sorts of information from the web: new emails/tweets etc.

So, we have a new plan: humanoid (ish) robot becomes the user interface for the Internet of Things. Down the line I can add a microphone and make the voice interface two-way using something like PocketSphinx or Google’s Speech API.
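The logic for a first version can be tiny. The room names and thresholds below are placeholders for whatever the sensor network actually reports, not recommendations:

```python
def alerts(readings, max_humidity=70, min_temp=16):
    """Turn sensor readings into sentences for the robot to speak.

    `readings` maps a room name to (temperature in C, relative humidity %).
    Thresholds are illustrative defaults only.
    """
    msgs = []
    for room, (temp, humidity) in readings.items():
        if temp < min_temp:
            msgs.append(f"The {room} is too cold at {temp} degrees.")
        if humidity > max_humidity:
            msgs.append(f"Humidity in the {room} is high at {humidity} percent.")
    return msgs

# Hypothetical snapshot from two rooms' sensors:
msgs = alerts({"bathroom": (19, 82), "hall": (14, 55)})
```

Feed each message to the text-to-speech engine and the robot becomes the mouthpiece for the whole sensor network.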

This will require one simple (ha!) hardware modification: if the robot is to be on 24/7, I’m sure as hell not running him on batteries.

*Note: As I write this, the blog is rather behind my actual progress with the home automation project, which is stably monitoring multiple conditions in four different rooms.
Take a Picture, With Your Mind. Wed, 05 Mar 2014 13:00:15 +0000 Imagine taking a picture just by thinking.

You train a neural interface to recognise the patterns of activity that are fired when you hit the shutter button on a camera, or your phone. Then when you think that thought, the neural interface triggers snapshots from discrete – and discreet – wireless cameras distributed around your body. One in your glasses, one in your shirt button, one in your shoes.

Software in the cloud stitches the images together into a multi-megapixel whole and works out what the likely focus was meant to be, dynamically polishing the output, sharing it to your social streams and storing it for posterity.

This isn’t some wild sci-fi fantasy. It’s a very close reality.

Neural interfaces are already consumer items, available for just a few tens of pounds in gaming systems. Recognising the same brain patterns being repeated should actually be relatively simple.
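To see why, here is a toy version of the idea: store a window of readings captured when you pressed the shutter, then compare new windows against it with a normalised correlation. Real EEG signals are far noisier than this, and all the numbers are invented, but the core mechanism – match new activity against a recorded template – really is this simple.

```python
def matches(template, signal, threshold=0.9):
    """Toy 'same thought?' check: Pearson correlation of two
    equal-length windows of (invented) sensor readings."""
    n = len(template)
    mt = sum(template) / n
    ms = sum(signal) / n
    cov = sum((t - mt) * (s - ms) for t, s in zip(template, signal))
    sd_t = sum((t - mt) ** 2 for t in template) ** 0.5
    sd_s = sum((s - ms) ** 2 for s in signal) ** 0.5
    return cov / (sd_t * sd_s) >= threshold

# Hypothetical window recorded while pressing a real shutter button:
trigger = [0.1, 0.9, 0.4, 0.8, 0.2]

hit = matches(trigger, [0.12, 0.88, 0.41, 0.79, 0.22])  # similar pattern
miss = matches(trigger, [0.9, 0.1, 0.8, 0.2, 0.7])      # different pattern
```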

At Mobile World Congress last week Rambus showed me a camera the size of a pinhead. It needs no lens, and will cost less than 20p per unit once it is manufactured in volume.

Wireless data standards for short range transmission advance apace. Power requirements at the personal area range are low. And with the demonstrations the Alliance for Wireless Power showed me last week, a wireless charging unit could keep button-sized batteries powered up all day.

Send the images up to the cloud over 4G – it doesn’t have to be instantaneous if you’re not stood there holding your phone and waiting – or Wi-Fi. There’s loads of computing grunt on tap and automatic post-processing is already well developed.

This is real. The question is, do we want it?

People are already uncomfortable with Google Glass, but that stands out a mile. What happens when your smart wearable devices disappear into the fabric of your everyday clothing? It’s a theme to which I keep returning because it is imminent.

It’s up to us to discuss this stuff and set some rules, if we want them.

Smart Cities: We Need to Talk Sun, 02 Mar 2014 09:16:22 +0000 Amid the hype and bluster of Mobile World Congress it is refreshing to hear someone admit they don’t know the answer. Francisco Jose Jariego Fente is Telefonica Digital’s Industrial Internet of Things Director. The question he willingly accepts he can’t answer is admittedly a tricky one: what is the business model for smart cities?

Telefonica has more evidence than most for what the answer, or answers, might be. Its project in Santander has proven there is little money to be made in the hardware: the city rolled out 12,000 sensors funded by a relatively small €1m from the EU. And the sum of the data collected from those sensors, just 5MB per day, similar to a single photo or MP3 file, suggests there is very little to be made in its carriage or storage.
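The back-of-envelope arithmetic makes the point:

```python
# Santander's whole sensor network barely registers as data traffic.
sensors = 12_000
daily_mb = 5  # reported total for the city, per day

bytes_per_sensor_per_day = daily_mb * 1024 * 1024 / sensors  # a few hundred bytes
monthly_mb = daily_mb * 30  # a city's whole month fits on a small USB stick
```

A few hundred bytes per sensor per day, and around 150MB a month for an entire city: nobody is getting rich carrying or storing that.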

The biggest challenges, and hence the biggest potential revenues, come in processing and presenting the data in a useful form. This is where Telefonica has focused its efforts and is looking to commercialise the learning from the Santander experiment. IBM too has recognised that this is where the value lies.

But this value only becomes tangible when the rest of the smart city ecosystem is in place. Cities are complicated. They are managed by multiple authorities and commercial parties. They evolve constantly, reacting to the needs of their inhabitants. And those inhabitants themselves, who in many ways represent the city much more than its buildings or infrastructure, have a say in how it develops: any executive control is limited.

Building a smart city on a green field site like South Korea’s Songdo is one thing. But there are huge drivers to smarten all our cities. And that means retrofitting technology, processes and partnerships to an existing, evolved organic environment. One model isn’t going to fit every city. Making it happen will be a process of negotiation, integration, iteration. And there will be lots of different parties involved: political leaders, civil servants, service providers, technology companies, health services, police forces, property owners and most important of all, the citizens themselves.

Brokering a framework that keeps all of these people at least relatively happy, while delivering on the promise of smart cities is no small task. It will only come through dialogue. But it’s a conversation we need to have. Because the promise of smarter cities is too great to ignore.

In the first instance there is simply lower costs, both financially and to the environment. There are lifestyle benefits: less traffic, quicker parking, more efficient public transport. Taking things a step further, there are advantages to planners: recognising a noise problem in one place might inform a change in planning to a new building nearby, perhaps requiring materials that absorb or deflect sound, or the planting of trees as a screen. Ultimately, there is the prospect of properly understanding our cities and the interactions that make them live, so that we can make more informed decisions about their future, in local government, in corporations, and as individuals.

Smart cities have long held promise, but the complexity of the problem they present has retarded their progress. To get things moving, as we need to do, a broad and open conversation between all of the interested parties is required: to agree how the interactions will be managed and, vitally, how the costs and rewards will be divided.

There’s No Such Thing As A Digital Industry Wed, 05 Feb 2014 12:45:33 +0000 Last night was spent debating Manchester’s future as a home to technology innovation, with representatives from the city leadership, technology, telecoms, law, and finance firms. One issue really stuck with me. Everyone was keen on the idea of promoting the digital industry but there was frustration at the lack of a common voice for this sector.

The problem is this: no-one can speak for the digital industry because there is no digital industry. Not in Manchester. Not anywhere. What is loosely grouped into ‘tech’ or ‘digital’ is actually three or four (possibly even more) very distinct sectors with very, very different needs.

In Manchester I’d classify these as ‘Products’, ‘Services’ and ‘Infrastructure’ but you could probably add ‘Materials’ and ‘Advanced Manufacturing’.

Product companies are proper tech start-ups. Companies that are incubating an idea with great potential to scale. They need an environment to network and meet to start with, so that teams can naturally assemble. Then above all else they need time. Time means the right sort of finance, and low overheads: office space, connectivity etc. Once they reach scale they need access to talent: generally high level talent with specific, technical skill sets but also sales, support, marketing and creatives.

Services companies are largely marketing/digital agencies of one form or another. The skills they require are very different, as much creative and inter-personal as technical. These companies have limited potential for scale: it’s a highly competitive market. Growing means adding people and the cost of managing those people rapidly starts to diminish the focus of the founders. The best hope is reasonable scale, stability, and good margins, and ultimately perhaps a trade sale to a local rival or national network. What they need is opportunities to sell and access to contracts, from the public sector and large local companies.

Infrastructure companies might serve the start-ups, or the agencies, or any other businesses around Manchester. Depending on their particular focus the challenges might be access to power, the cost of laying fibre, or competing with unfairly advantaged national players. They need technical skills but those skills are generally very different to those required by the product or service companies.

Manchester’s agencies have a good representative body. But that body doesn’t speak for start-ups, and it doesn’t speak for the infrastructure players. In fact I don’t know of any body that claims to speak for those groups and their needs.

If Manchester, and the UK as a whole, is going to have future economic success powered by technology-driven businesses, then those businesses need to be understood for what they are. Not conflated in groups under meaningless terms like ‘the digital industry’, ‘the technology sector’, or ‘the knowledge economy’.
