Wednesday, March 24, 2010

Two reasons why companies fail their customers

I flew back from San Francisco a while back on KLM. The flight was delayed 5 hours. KLM managed to make this unfortunate situation worse in ways that led me to wonder why so many companies fail their customers. Or rather, why so many companies let opportunities to delight customers slip by.

The cause of our delay was that our 'plane was held on the ground in Tokyo by a typhoon. Once airborne, it had to fly first to Amsterdam and thence to San Francisco. KLM Operations knew of the knock-on effect on our flight more than a day in advance of our scheduled departure. And yet KLM failed to tell any of its customers of this delay until we were at the airport. Their website showed the flight as on time; even their customer service people told me it was on time when I 'phoned before I left for the airport. If I had known of the delay I would have stayed in the city to have lunch. As it was, I spent 5 dull hours in the airport waiting.

What a missed opportunity. The delay was unavoidable of course, but wouldn't it have been great if KLM had contacted us, the passengers, to tell us it was delayed and to suggest a new, later, check-in time? I would have thought "wow, what a company, they really think about their customers".

Mistakes and delays happen all the time of course. A smart company understands that whenever this happens, they have a golden opportunity to delight their customers. Expectations are usually low, so even small things can make a big difference. A dumb company just shrugs its shoulders and apologizes. What a missed opportunity.

A friend on the same KLM flight was connecting in Amsterdam and flying onwards. She missed her connection of course, and KLM rebooked her on a later flight - much later - an additional 5-hour delay for her. Her flight was due to arrive after 10 p.m. She then needed to catch a train home. Now, late-night Saturday trains in most European cities are not pleasant places to be, especially for a woman traveling alone. The trains are full of drunken party-goers.

We explained this to KLM. We asked one of their ground staff if they would book her on an earlier flight, if necessary with another airline. They refused. We asked if KLM would pay for a hotel so that she could travel home the next day. The reply was that it was not KLM's responsibility. How we chose to get to and from the airport was up to us and had nothing at all to do with KLM. End of conversation. Goodbye.

What a missed opportunity. A smart company understands that customer engagement often starts long before they buy a product or turn up at the airport. And they understand that along the way there are great opportunities to delight the customer. A dumb company rarely looks beyond its own borders and hides behind its policy and procedures.

While I was in San Francisco I went to a new restaurant called Saison. It is located in an "interesting" neighbourhood - the taxi driver who drove me there wouldn't leave until he was certain I was at the right place! After the meal, I asked them to call me a taxi to take me back to my hotel. The meal was terrific but imagine my surprise and delight when I got back to the hotel, tried to pay the taxi driver, and was told the restaurant had taken care of it. What a great customer experience. They didn't advertise this service, nor was I expecting it. It makes me love the restaurant even more and, of course, I would happily recommend it to anyone.

Sunday, October 11, 2009

Notes & Thoughts from iPres 2009

This was the Sixth International Conference on Preservation of Digital Objects (iPres). These conferences "bring together researchers and practitioners from around the world to explore the latest trends, innovations, and practices in preserving our scientific and cultural digital heritage". This one (iPres 2009) was hosted by the California Digital Library (CDL) and took place in the conference centre of UCSF Mission Bay in sunny San Francisco. CDL did a really superb job in organising the event.

The iPres community is made up of people from academic libraries, national libraries, national archives, service providers (like ExLibris, Tessella, Sun, Ithaka, LOCKSS, etc.), web archivists, preservation researchers, and me.

The following are some notes I made during the conference and some random thoughts that occurred to me while listening to the presentations and tuning in on the #ipres09 Twitter channel. Other blogs covering this event were: Digital Curation Blog, Daves thoughts on stuff, FigoBlog (in French) and Duurzame Toegang (in Dutch). Photos from the event here, and here.

Day 1

David Kirsch (University of Maryland) gave a thought-provoking keynote address on the need to preserve corporate records: "Do corporations have the right to be forgotten?". There are many reasons why corporate records are lost or destroyed. Often it is simply that record keeping has a low priority in the corporate world - few companies have formal policies for digital preservation. Furthermore, lawyers tend to advise companies to destroy records to avoid possible future liabilities. David argued that there is a public interest in preserving the records of corporations for research purposes. He had some great ideas on how this might be achieved. One of the ideas was to give archivists the option to claim company records in bankruptcy courts. Brilliant.

Panel discussion on sustainable digital preservation

The Blue Ribbon Task Force has been studying the issues of economic sustainability for digital preservation. Their final report is due in Jan 2010 (the interim report is available here).

Many digital preservation activities are funded through one-off grants or discretionary funding. This is obviously not a sustainable source of funding with which to guarantee long-term preservation - someone has to pay to keep the servers humming and the bits from rotting. There is also the "blank cheque" problem: few funding bodies are comfortable agreeing to support preservation of an unknown amount of digital data for an indeterminate length of time.

A few groups are beginning to provide paid-for archiving services. One of the most interesting is CDL’s easy-to-use Web Archiving Service.

Abby Smith noted that a key discussion area for the task force had been the handover points when stewardship of information passes from one party to another. This was one of the most insightful moments of the conference for me: the importance of designing preservation systems in such a way that they can be passed on to someone else to continue the stewardship of the data. This seems to me a much more manageable problem to work on than how to preserve an infinite amount of data for an infinite length of time. Handover of stewardship was one of the main drivers behind the development of the DOI system - how to make sure that digital objects remain findable when ownership of the rights changes. I wonder if the registration agency model that the International DOI Foundation (IDF) uses might be helpful here (Disclosure: I am Chair/Director of IDF).

Henry Lowood (Stanford) spoke about preserving virtual gaming worlds. My first thought was: why would anyone bother to preserve virtual worlds? I realized, however, that the world is full of collectors of stuff; preservation of anything starts with someone who cares enough about it to spend time and money on building an archive.

One of the big challenges in preserving multi-user games is that the game environment itself has been built by many gamers. Figuring out who built which element and asking them for permission to archive is a major headache. Another problem is that it is not enough simply to take screenshots, since the way the game has been played is part of the essence of what needs to be preserved. In other words, the game content and game engine are inseparable from one another. It seems to me that this may be the future for all content. There will be a time when the content alone is almost meaningless without the context in which it was created. If we want to preserve the content, how do we capture the context along with it? And in a world of multi-user-generated content, how will we ever find out who created which piece of content? Maybe we need a digital data donor card, where you can formally donate the data you have created in your life to the public domain, and record this fact permanently so that future archivists can mine your data with your posthumous permission.

Reinhard Altenhoner, German National Library, argued that most digital preservation projects have been single initiatives, creating safe places for information. Few have looked at the wider e-infrastructure implications. In what environment do these islands of activity operate? Where is the common approach? He proposed a service-layer architecture for digital preservation - decoupling system components and working on interoperability, open interfaces, etc.

And that is exactly what CDL are doing. Stephen Abrams, in what was for me one of the best presentations of iPres, told us how. CDL believes that curation stewardship is a relay race: we should concentrate on doing the best job now, then hand over to someone else to continue the stewardship. With this in mind, their approach favours the small and simple over the large and complex, and the re-use of existing technologies rather than creating new ones.

They have come up with “interoperable curation micro-services” - a definition of the granularity of services for a full preservation system. They group these into 4 service layers, characterized as:

1) Lots of copies keep stuff safe (providing safety through redundancy)
2) Lots of description keeps stuff meaningful (maintaining meaning through description)
3) Lots of services keep stuff useful (facilitating utility through service)
4) Lots of uses keep stuff valuable (adding value through use)

There is much more detail in their conference paper, which I should imagine will be required reading for all iPres delegates.
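
To make the small-and-simple idea concrete, here is a toy sketch of my own (the function names and layering are hypothetical - this is not CDL's actual code or API) showing how the first two layers might be built as tiny, independently replaceable services:

```python
import hashlib


def store(payload: bytes, replicas: list) -> None:
    """Layer 1 - redundancy: write the same bytes to several stores."""
    digest = hashlib.sha256(payload).hexdigest()
    for replica in replicas:
        replica[digest] = payload


def describe(payload: bytes, metadata: dict) -> dict:
    """Layer 2 - description: attach meaning to the bytes."""
    return {"sha256": hashlib.sha256(payload).hexdigest(), **metadata}


# Layers 3 and 4 (services and uses) would be built the same way: small,
# independent components behind open interfaces, replaceable one at a time.
replicas = [{}, {}, {}]                      # three toy object stores
store(b"some content", replicas)             # lots of copies keep stuff safe
record = describe(b"some content", {"title": "An example object"})
print(record)
```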


Pam Armstrong and Johanna Smith from Library and Archives Canada had an interesting story to tell. Some time ago, their national auditor wrote a withering report on record keeping within the Canadian government. They used this as a big stick to drive compliance with better record keeping and archiving. Very cleverly, though, they also developed a useful plug-in that made it easy for government staff to comply.

Lesson learned: if you decide to use a stick instead of a carrot, then make sure you provide some protection so that it doesn’t hurt too much!


Robert Sharpe, Tessella, talked about the results of a survey they had done for Planets (which seemed to me to be very similar to the PARSE/Insight one earlier this year...). An interesting correlation appeared to be that if an organization has a formal policy for digital preservation, it is much more likely to have funding for digital preservation too. I wondered whether the answers to this question on the survey were skewed. I mean, if you had funding for digital preservation, how could you ever admit on a survey that you didn’t have a policy?

Their final conclusion was that more work needs to be done to fully understand the landscape. In my experience, that usually means the original questionnaire was not especially well thought out or pilot tested before being sent out. Perhaps the survey was more of a plea for awareness of the issues around digital preservation?

Actually, I think questionnaires are a really poor way of gaining insights into complex issues - and let's face it, what important issues are not complex nowadays? The interesting insights are rarely contained in yes/no answers but in the discussion that goes on in someone’s mind, or within an organization, in answering the question. I find that understanding why an answer was "yes" or "no" is usually much, much more interesting than the answer itself. Questionnaires also ignore the political angle - I mean, what national library tasked with digital preservation could ever answer “no” to the question “do you have a documented preservation policy?”, even if it does not?

Ulla Kejser, Danish National Library, presented a model for predicting costs. They came to the conclusion that there is a strong dependency on subjective cost assessment, either in deciding how to map a framework like OAIS or simply in predicting the cost elements.

Interestingly, they took care not to assume potential cost savings upon system deployment - they stress that to realize cost savings from re-use, you need an organization that is capable of learning and re-applying that learning.

It seems to me that all the costing models suffer from the same drawback: when you scale the number of preserved records to a large number, even a tiny estimation error will be magnified hugely - an error of just one cent per record per year becomes ten million a year across a billion records. The underlying problem is that it is impossible to predict accurately something that has not been done before. This is something that is very familiar to anyone involved in agile projects. I wonder if the estimation and planning techniques used there might be useful here.

Day 2


Micah Altman, a social scientist from Harvard, spoke on open data. He maintains that journal articles are summaries of research and not the actual research results (I think he is missing the fact that articles also contain hypotheses and conclusions, not only summaries of work undertaken). His main point, though, is that researchers need access to the underlying data, which I wholeheartedly agree with. How we do this is trickier. I think the TIB in Germany have made a great start by defining persistent identifiers for scientific data - see here - but there is lots more to do. It has always been very hard to peer-review scientific data since the reviewer does not usually have access to the software needed to view the data or, for example, to run a simulation.

I think Altman also forgets that publication and dissemination are an annoying necessity for many scientists; they are not something scientists enjoy or wish to spend a lot of time on, let alone preserving the output. Most researchers just want to do research. Making it easy for them to cite, to archive and to preserve is the key, I think.

Martha Anderson of NDIIPP noted an interesting observation from Clay Shirky: each element of digital preservation has a different time dynamic and a different lifecycle. His advice was that “the longer the time frame of an element, the more social the problem”. Thus the social infrastructure around digital preservation is more important than the technical aspects. That is certainly our experience with the DOI System.

She also re-iterated the Danish point that learning organizations are key for collaboration.

There was a panel discussion on Private LOCKSS Networks. PLNs are small, closed groups of institutions that use LOCKSS technology and architecture to harvest and store data in their domains. It looks like quite an interesting model, providing an open-source architecture for institutions to roll their own digital preservation. I do have a concern about the LOCKSS architecture, which is probably down to the fact that I haven't studied it well enough yet: I worry about the chaos of multiple copies of multiple resources that may or may not be correctly tagged or uniquely identifiable. If I find something, how do I know whether it is a copy or the original, or whether there is a difference? LOCKSS solves the problem of keeping stuff safe through redundancy, but it seems to me that in doing so it creates some new problems.

Ardys Kozbial showed that Chronopolis do a good job for their clients, with a straightforward, uncomplicated solution for multiple file types. Their dashboard for partners / data providers showing where their data is in the process queue was very impressive.

Christopher Lee showcased ContextMiner - a multi-channel agent that crawls multiple sources for multiple keyword searches. It shows just how easy web crawling has become.

Jens Ludwig, University of Goettingen, reported that Ingest was the biggest cost factor in digital preservation. This was hotly disputed during the discussion and on Twitter. He and his team have produced a guide on how digital information objects can be ingested into a digital repository in a manner that facilitates their secure storage, management and preservation. The draft is available here.


Emmanuelle Bernes, French National Library (BnF), gave a terrific presentation on the transition from Library to Digital Library and its impact on people. She described two phases in the transition. The initial phase was characterized by “digital is different”: digital was driven by experts and early adopters, there was a separate organization, and the culture was learning by doing. Now that the Library is fully digital, they have integrated the digital skills and tasks into all areas of the Library, running them as production teams; there are training programmes throughout the Library, open to all.

Interestingly, she described the transition as a dissemination, a spreading out of the skills learnt in the initial phase throughout the rest of the organization. The key insight for me was that since it is the people who carry the expertise, you must spread those people around the organization if you hope to spread their skills.

They provide multi-day training (7 days in total: one day of introduction and three 2-day courses) with a dedicated curriculum covering digital information management, metadata, digital libraries, digitization and digital preservation. The really clever thing they did was to open these courses up to everyone - not only those who needed the training for their day-to-day activities, but also those who wanted to be there out of curiosity. Genius.

They have started a project to look at the impact of digital on people, processes and organization, and to figure out innovative ways of doing it better.

I think there is much that other (National) Libraries could learn from the BnF, not to mention Publishers struggling with the transition from print to digital. It was a great presentation to end the conference with.

I collected the following memorable quotes during the Conference:

Henry Lowood “The world will end not with a bang but with the message: network error; server has shut down”

Rick Prelinger “Developing a 4 dimensional map of the world - how the world looked in space and time”

Martha Anderson “Collaboration is what you do when you cannot solve a problem on your own”

Adam Farquhar “Do you worry about having too many of your nodes in the same cloud?”

Jens Ludwig “You should define ingest as a transfer of responsibility and not as a technical transfer”

David Kirsch “There are more entrepreneurial ventures started in a year than there are marriages”

Pam Armstrong "Alone we can go faster, together we can go further"

And finally, some random thoughts that occurred to me:

Perhaps the biggest steps forward have been when public and private interests match. Maybe that is the key to digital preservation - finding areas where the interests meet. Tricky, since the timeliness and time dynamics are so different. Is there a market for preservation? What would that be?

Struggling with the size of the problem - infinite data stored for infinity is just too big and hairy to cope with. For funding agencies it must seem like a blank cheque - very scary.

I loved the fantastic film footage from Rick Prelinger. He has a dream to recreate the world in space and time - recovering footage from the same place taken over time, to see how things have developed.

Pure genius: Prelinger would like an iPhone app that knows where you are and in what direction you are looking, then shows you videos or stills of what it used to look like in the past.

Scientific data is meaningless without the environment / system in which the data was created - just like the virtual world game content is meaningless without the engine with which it was created.

Persistence comes from a persistent, acknowledged need matched by some persistent funding model - either a sustainable business model, sustainable government funding, or some other investor. Either way, there has to be something in it for all the actors, otherwise there is no balance. So maybe the exercise is to identify the actors, their needs, and what is being offered to whom, for what. Also, one model may not be enough - it rarely is in the rest of the world.

One of the attributes of a web-service architecture is that services describe themselves and what they do. Maybe that's a model for the distributed service approach: agree the metadata for self-describing the modules - what they preserve, why and how. You could then design wrappers around non-compliant (legacy) modules to bring them into the same architecture. This is what CDL are doing for themselves, but what if the APIs they are defining become standards across the community?
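
Here is a minimal sketch of what I have in mind - everything in it (the field names, the describe/v1 schema, the module names) is hypothetical, not an existing standard:

```python
from dataclasses import dataclass, field


@dataclass
class ServiceDescription:
    name: str                                       # human-readable module name
    preserves: list = field(default_factory=list)   # what content it preserves
    guarantees: list = field(default_factory=list)  # why/how it preserves it
    interface: str = "describe/v1"                  # version of this schema


class FixityChecker:
    """A compliant micro-service: it can describe itself."""

    def describe(self) -> ServiceDescription:
        return ServiceDescription(
            name="fixity-checker",
            preserves=["any bitstream"],
            guarantees=["bit-level integrity via checksums"],
        )


class LegacyStoreWrapper:
    """A wrapper that retrofits self-description onto a legacy module."""

    def __init__(self, legacy_store):
        self.legacy_store = legacy_store

    def describe(self) -> ServiceDescription:
        return ServiceDescription(
            name="legacy-tape-store",
            preserves=["TIFF", "PDF"],
            guarantees=["bit-level only"],
        )


def inventory(modules) -> list:
    """What does this whole preservation system offer?"""
    return [m.describe() for m in modules]


print(inventory([FixityChecker(), LegacyStoreWrapper(object())]))
```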

Collaboration only works when the parties involved are willing and able to learn from each other. Before you plan to collaborate, take the test: “are you a learning organization?”. If you cannot learn from your own people, how can you expect to learn from collaboration with others?

Thursday, July 23, 2009

Corporate Blogging Policy

If anyone were to ask me to write a corporate blogging policy, then this would be it: "Disclose or disclaim but don't stay silent".

Disclosure is very important since it helps the reader make up their own mind. It helps them judge the intent behind the author's words.

Here is the best example of disclosure I have ever seen. It was published in the Dear Mary section of The Spectator magazine, where readers can ask Mary Killen for help solving their problems. Someone wrote in to ask what to do when the person in front of you in an aeroplane reclines their seat, leaving you no space of your own. A fellow reader offered this advice:

"May I suggest you advise J.B. of London N1 that next time he is travelling long-haul he should fly Cathay Pacific, whose economy-class seats have a rigid back-shell which does not recline into the space of the passenger behind. The recline is achieved by the seat ingeniously sliding forward instead. Cathay Pacific also has four flights daily from London to Hong Kong, and up to two daily onward flights to New Zealand — all at very competitive fares! I apologise for the commercial, but we’ve gone to great trouble to prevent just the problem JB complains of, and I can’t resist the opportunity to point this out. The problem you could solve for me is the collapse of air-travel demand! Any bright ideas?

T.T., Cathay Pacific"

Even though this is an advert for Cathay Pacific, the full disclosure by T.T. lets you make up your own mind whether to accept it or not. There is no hidden agenda and the facts speak for themselves.

How to archive the web

The following are my notes and thoughts from the Web Archiving Conference held at British Library on July 21st, 2009.

The meeting was organized jointly by JISC, the DPC and the UK Web Archiving Consortium and attracted more than 100 participants. It was chaired by William Kilbride, Executive Director of the DPC, and Neil Grindley, programme manager for digital preservation at JISC. The presentations are available here.

Adrian Brown of UK Parliamentary Archives raised the interesting issue of how to preserve dynamic websites, ones that personalize on the fly. If every page on a website is individually created per user, then what version do you archive?

He also talked about versions across time. For instance, what is the best way to archive a wiki: take a snapshot every so often, or archive the full audit trail? Versioning is also an issue when a site is harvested over a period of time, since there is a chance the site has been updated between harvests - something he called a lack of temporal cohesion, or temporal inconsistency.

Someone from the BBC noted that the BBC used to record only the goals in football matches and not the whole match. Now they realize how stupid this was - hence we should avoid the same pitfall of applying too much collection decision-making to archiving. This touches on one of the main issues facing web archivists: what to collect and what to discard? Most seem to make this decision on pragmatic grounds, e.g. do we have permission to crawl or archive? How much budget do we have? Do we have a mandate to collect a particular domain?

It strikes me that this is only a problem when there is a single collection point. The reality is that all sorts of people all over the world are archiving the web from multiple different perspectives, all at the same time. If enough people and organizations do this, then all of the web will be archived somewhere, sometime. So, for instance, if there were a referee foundation archiving football matches for training purposes, and a football coaching organization, and the two clubs playing, then it wouldn't matter that the BBC only saved the goals. The problem was that the BBC were the only ones filming the matches - a single collection point.

This touches on another main issue: the relationship between the content creator and the archivist. More on that later.

Peter Murray-Rust was quoted several times during the meeting. This is intriguing since he mostly seems to advocate against building digital archives, which he thinks are effectively impossible and a waste of time. Instead we should disseminate data as widely as possible; if people are interested enough they will take copies somehow. Or as he puts it: "Create and release herds of cows, not preserve hamburgers in a deep-freeze". The wider point here is that web archives should be part of the web themselves rather than hidden away in offline storage systems.

Another big issue here: access. If the archive is fully accessible, how do you know whether what you find through Google is the archived version or the live version? Suppose there were multiple copies of the entire web, archived by different institutions, all accessible at the same time? Sounds like chaos to me. A chaos that only metadata can solve. Or so it seems to me.

I think it would help if there were metadata standards for archiving of websites. It could be a minimum set of data that is always recorded along with the archived contents. Archives could then be made interoperable either by using the same metadata schema or by exposing their metadata in some sort of data dictionary that is addressable in a standard way. If the standards are adhered to it would be possible to de-duplicate archived websites and easily identify the "live" version. It would also be easy to keep track of the versions of a website across time so that a single link could resolve to the multiple versions in the archive.
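
As a thought experiment, here is roughly the minimum record I am imagining, sketched in Python. The field names are my own invention, not any existing schema, but with just these four fields, de-duplication across archives and version tracking over time become simple operations:

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class ArchivedPageRecord:
    original_uri: str   # the live address the snapshot was taken from
    captured_at: str    # ISO 8601 timestamp of the harvest
    archive_id: str     # which institution holds this copy
    sha256: str         # hash of the archived payload


def make_record(uri: str, captured_at: str, archive_id: str,
                payload: bytes) -> ArchivedPageRecord:
    return ArchivedPageRecord(uri, captured_at, archive_id,
                              hashlib.sha256(payload).hexdigest())


def deduplicate(records):
    """Identical captures of the same page collapse to one entry,
    regardless of which archive holds them."""
    return list({(r.original_uri, r.sha256): r for r in records}.values())


def versions(records, uri: str):
    """All captures of one page, ordered in time - a single link
    could resolve to this list."""
    return sorted((r for r in records if r.original_uri == uri),
                  key=lambda r: r.captured_at)
```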

Kevin Ashley made the point that we should not only collect the contents of the web, but also content about the web, if future generations are to make sense of the archive. One simple example is the words used in the websites that are archived today. Perhaps we need to archive dictionaries along with the content so that 100 years from now people will know what the content means.

There seems to be a consensus in the web archiving community to use the WARC format to capture and store web pages. As I understand it, this is a format to package and compress the data, including embedded images, PDFs, videos and so forth. When the record is accessed, it is presumably unpacked and delivered back as web pages. But what if the embedded file formats are no longer compatible with modern operating systems or browsers? One answer to this problem is to upgrade the archive files to keep pace with new software releases. Presumably this means unpacking the WARC file, converting the embedded formats to the new versions, then repacking.
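
That unpack-convert-repack cycle can be sketched with the open-source warcio library. The convert() function below is a hypothetical stand-in for whatever format-migration tool you trust; the rest is ordinary warcio usage:

```python
from io import BytesIO

from warcio.archiveiterator import ArchiveIterator
from warcio.warcwriter import WARCWriter


def convert(payload: bytes, content_type: str) -> bytes:
    """Hypothetical format migration - e.g. an obsolete image
    format converted to a current one."""
    return payload  # placeholder: pass the bytes through unchanged


def migrate_warc(src_path: str, dst_path: str) -> None:
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        writer = WARCWriter(dst, gzip=True)
        for record in ArchiveIterator(src):
            if record.rec_type != "response":
                writer.write_record(record)  # copy non-content records as-is
                continue
            uri = record.rec_headers.get_header("WARC-Target-URI")
            ctype = (record.http_headers.get_header("Content-Type")
                     if record.http_headers else None)
            payload = convert(record.content_stream().read(), ctype)
            # N.B. a real migration would also update the HTTP Content-Type
            # and Content-Length headers to match the converted payload.
            writer.write_record(writer.create_warc_record(
                uri, "response",
                payload=BytesIO(payload),
                http_headers=record.http_headers,
            ))
```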

Jeffrey van der Hoeven believes that emulation is a solution to this problem. He is part of the project team that developed the Dioscuri emulator. He is currently working to provide emulation as a web service as part of the KEEP project.

If you would like to dig into the history of browsers, go to evolt.org, where you'll find an archive of web browsers, including the one Tim Berners-Lee built in 1991, called simply "WorldWideWeb".

Probably the single biggest issue facing web archivists is permissions. Obtaining permission to crawl and archive is time-consuming and fraught with legal complications. Large institutions like the British Library take great care to respect the rights of content creators; as a result, UKWAC is unable to harvest up to 70% of the sites it selects. Others operate a remove-upon-request policy. Edgar Cook of the National Library of Australia reported that they have decided to collect even without permission; they just keep the content dark unless permission to archive is granted. Edgar challenged the group: "are we being too timid - hiding behind permissions as an explanation for why archives cannot be complete?". Several people noted that it was difficult to reach out to content creators; Helen Hockx-Yu said "communication with content creators is a luxury".

I wonder if this is perhaps the most important issue of all: connecting the creator to the archiver. It seems to me that to be successful, both need to care about digital preservation. I think Edgar Cook is right: the danger in hiding behind permissions, or hoping for strong legal deposit legislation, is that it avoids the issue. Content creators need to understand that they have a part to play in keeping their own work accessible for future generations. Archive organizations have a big role to play in helping them understand that. For instance, archives could issue badges for content creators to place on their websites to show that their work has been considered worthy of inclusion in an archive.

Kevin Ashley set me thinking about another idea. Suppose there was a simple self-archiving service that anyone could use for their own digital content. In return for using this tool, content creators would agree to donate their content to an archive. It would be a little like someone donating their personal library or their collection of photos upon their death - except this would be a living donation, archiving as the content is created, in a partnership between creator and archive. Mind you, I am sure that a simple self-archiving tool will be anything but simple to create.

Indeed it is clear that web archiving is not at all easy. There are lots of questions, problems, issues and challenges and this meeting highlighted many of them. Unfortunately, there don't seem to be too many answers yet!

Monday, July 6, 2009

Lessons from bagged salad

I attended a webinar last week given by Professor Ranjay Gulati of Harvard Business School. One of the great examples he spoke about was, of all things, bagged salad. Bagged salad is still one of the fastest growing food retail product lines, despite the E. coli scares of recent years. The convenience factor has been lauded by chefs and nutritionists alike for popularizing salads. In short, it is a great case study for innovation.

Professor Gulati told us that it was not the lettuce growers who had come up with this idea. "How did they miss this?", he asked. "How did they not see the bagged opportunity coming?". His answer was that they were too busy asking their customers how good they thought their lettuce tasted. Too busy with their Salad Net Promoter Scores.

The message to the audience was plain. Don't be blinded by your existing business. Don't rely on metrics that measure how satisfied your customers are with your existing products. If you do, you risk missing opportunities around your product. Study how your customers use your products, and always be on the lookout for new ways to deliver or package what you produce.

I think this is very valuable advice. Learning how your customers use your products is a great way to discover new opportunities for product development.

I was intrigued by the bagged salad though. I mean putting food in bags seems really obvious. How could anyone not see that coming, however lettuce-obsessed they were? So I turned to Google to find out how bagged lettuce was invented.

It turns out that (of course) if you put pieces of lettuce in an ordinary plastic bag it will rot very quickly. Fine for the trip home from the grocer, no good for shipping and storage. Once lettuce is cut, it consumes oxygen and gives off carbon dioxide, water and heat. Left in the open air it will consume oxygen until it rots away completely. Keeping lettuce fresh requires a bag that will regulate the oxygen and carbon dioxide levels inside the bag.

I could not find out who first had the idea, but people were already experimenting back in the 1960s. Nothing worked well enough. Lettuce is very sensitive to wilting, and the plastic-bag technology just wasn't good enough. It worked for delivery to fast-food chains, but shelf life was too short for retail sales channels.

Not until 1980, that is. It took almost twenty years to develop a plastic film that was breathable and that could be machined into bags. Along the way they also learned how to fill the bag with nitrogen to lower the oxygen level and extend shelf life.

Twenty years of research and development to make the idea of bagged salad real.

I like this untold part of Professor Gulati's story. It is similar to the Personalized M&M's I have written about before. The same dogged determination to figure out how to solve the problem. The same kind of technology breakthrough that made it finally possible. The same belief that the problem could be solved, no matter what people said.

Just in case you are thinking that bagged salad has nothing at all to do with publishing, let me remind you of how it all started. The breakthrough that finally brought printing to the world was the availability of cheap paper. Before that the printing press was an academic experiment: What use was a cheap way to print if paper was prohibitively expensive? If Gutenberg or Caxton were alive today and working in the corporate world, their ideas would never make it past a Dragon's Den, let alone a business case review board!

There's no doubt in my mind that customer insight is key to innovation: seeing things that customers think or do that no-one else has seen before; realizing that people like to eat lettuce but that many find it a pain to wash and prepare. We have great techniques nowadays to help us do this, such as ethnographic studies, and user experience experts to help us apply them.

That only takes us so far, as the M&M people found out, the lettuce folks learnt and Gutenberg discovered as well. Believing that your team can solve something no-one else has ever solved before is at least as important as the insights that led you to see the problem in the first place.

Never forget that innovation requires dogged determination and sheer hard work. Or as Guy Kawasaki wrote in Rules for Revolutionaries: "create like a god, command like a king, and work like a slave".


Thursday, July 2, 2009

Innovation is a Risky Business

I have heard many, many senior executives talk about failure. "We have to become more tolerant of failure", they say, or "we have to learn to fail" or "fail often, fail early". And yet, when things do go wrong, their first questions are often "who is responsible for this? Who's accountable? Who is to blame?". Well-intentioned project post-mortems turn into blamestorming sessions.

And I don't blame them!

It's human to think like this. My first reaction when things go wrong is to blame myself. I've tried telling myself that I should be more tolerant of my own failings but somehow I don't listen to myself.

Nowhere is this struggle more intense than in innovation. You must try new ideas out to see if they work. Sometimes, despite all the research, you only know that the idea will work after launching. Innovation is a risky business.

I don't think we have to learn how to fail. I think we have to learn how to understand risk and how to mitigate it, how to manage it.

Here's an example. Some years ago I was working with someone from Accenture. His previous assignment had been as a Product Manager with (if my memory serves me correctly) Vodafone. He had headed a new product development that upon launch was not as successful as had been hoped and was discontinued soon afterwards.

Vodafone had been very clever. They had assessed the risk of this particular idea and decided that it was high. Too high to risk assigning one of their own Product Managers to lead the development: if it was unsuccessful, it would be highly career-limiting for that person. In their company culture, a track record of success was important for building a career. So they hired a contractor instead.

It turns out this was the normal practice for their product development group. Risky new projects were handled by contractors. Less risky ones by their own staff. If a risky product ended up being successful they either hired the contractor or replaced them with one of their own staff to take it forward in the life cycle.

This was how they managed innovation risk. It seems to me to be a lot easier than trying to change their passion-for-winning culture.

Sunday, June 28, 2009

Business models: a question of honour?

There's a story in Purple Cow that caught my eye about two restaurants. The first, Brock's Restaurant in Stamford, Connecticut, has a sign up above their salad bar: "Sorry - No Sharing". There is an explanation that this rule is necessary so that the restaurant can continue to provide good value. In other words "please help us protect our profit margins".

Compare this to the wine policy at a restaurant called Frontiere. The owner puts an open bottle of wine on every table. At the end of the meal you tell the waiter how many glasses you have drunk. An honour system.

The brilliance of this idea is that two glasses of wine pay for the whole bottle at wholesale price. If a bottle costs the restaurant, say, $8 wholesale and a glass sells for $5, then two honestly declared glasses already cover the bottle.

Isn't the salad bar just like the media business? In this case the sign might read "Don't share music" or "Don't photocopy books". The underlying message is the same: "please help us protect our profit margins".

What if we were more like Frontiere? What if there was an honour system?

In journal publishing that is exactly what does happen. Journal articles have no DRM. Fair use allows for copying and sharing of articles. Publishers trust librarians to honour fair use. Librarians respect that trust. Articles are shared widely and everyone benefits.

Freeware has a similar business model. Download for free but make a contribution if you use it and like it. You might find it surprising but many people do.

So what might an honour system for books look like?

Suppose we make books available online by chapter. The student pays for one chapter but has access to all. If they download more, we ask them to tell us and pay.

Just as in the wine example, pricing will be key. I'm certain that a low price for a single chapter would attract a larger number of customers; I doubt that alone would be enough to protect the margins. The success of the model will likely depend on how many customers pay for a second or third chapter. The other thing to remember is that the printed book will of course still sell, although likely in smaller numbers. And there would be other derivative versions, such as the complete eBook, the eBook as part of a collection or library, and, for a textbook, an online course.

I'm sure this sounds like a crazy idea to you. Maybe it wouldn't work. But maybe it just might. In any event, I would love to see someone experiment with new business models like this for books. I think everyone would benefit.

Thursday, June 25, 2009

Do you love reading or do you love books?

Author Ann Kirschner set out to answer this question by reading the Charles Dickens classic “Little Dorrit” four ways: as a paperback, as an audiobook, on her Kindle and on her iPhone. She spoke about her experience on NPR's On The Media last week. You can listen to the interview below or read the transcript here.

[Embedded audio: On The Media interview with Ann Kirschner]
Ann's conclusion was that she loves reading and that each of the formats she chose offered something different and was useful in different ways. She loved the familiar paperback, the nostalgic feel of it in her hands. She used the audiobook in the New York subway. The Kindle was good but she didn't like the screen going black between pages.

Then she used the iPhone:

"....the iPhone, which seems, on the face of it, to offer the least enjoyable experience because the screen is so small. And yet.... the iPhone was the revelation to me. The screen is brighter, crisper. You can change pages instantaneously. But the most important thing is that the iPhone is always with you, or at least always with me."

There are two things that I take from this. The first is the same point that Stephen Fry made recently: "books are no more threatened by ebooks than stairs were by elevators". If you love reading, then celebrate that we have so many great ways to read now.

The second is Ann's experience with the iPhone. The fact that it is always with her compensated for an inferior user experience. Ease of use and instant availability trumped quality.

I'm afraid that many publishers are still convinced that their customers love books - after all, they are still buying them, right? I think that many customers are discovering, just as Ann did, that their joy of reading is stronger than their love of books. I call this the ebook event horizon.

Publishers should be falling over themselves to provide their books in every possible emerging format. They need to worry less about the things that are important for books (like quality) and care more about ease of use and instant availability on any device. What's to fear? After all, we still use stairs don't we?

Wednesday, June 24, 2009

Book Review: Purple Cow by Seth Godin

Something remarkable is worth talking about. Worth noticing. Exceptional. New. Interesting. This is a remarkable book for all those reasons. It is written with marketeers in mind, but I think the insights are relevant to anyone involved in innovative product development. The book is a passionate plea to us all to think differently about our markets, and to think about marketing from the moment we begin to design a product or service. In fact, marketing is design.

The underlying theme of Seth's book is that the old ways of marketing are dead. He does a great job of explaining why. For example: "The sad truth is that whatever you make, most people cannot buy your product. Either they don't have the money, or they don't have the time, or they don't want it. The world has changed, there are far more choices and less and less time to sort them out."

Seth continues: "No-one is going to eagerly adapt to your product. The vast majority of customers are happy. Sold on what they have got. Not looking for a replacement, and anyway they don't like adapting to anything new."

These are humbling insights. It is all too easy to become caught up in your own success story when you are building products. You need the positive momentum to keep you going, to keep you excited. So it is a shock to discover at launch that the world isn't waiting eagerly for the stuff you have worked so hard on.

So what can you do?

Be remarkable. Create something that is worth noticing and that people talk about. Ideas that spread are more likely to succeed than those that don't. It is not about gimmicks. Not about creating remarkable blurb. Not about making a product attractive after it has been designed. It is about designing the product to be remarkable from the start. Products that are worth talking about will get talked about.

The book is stuffed full of examples of remarkable products and services to help you understand what Seth means by this. My favourites are: the Dutch Boy paint cans that are designed with easy opening lids and carrying handles to make the painting process easier. Tracey the publicist who chose to focus on the narrowest possible niche (plastic surgeons) and to become the world's best publicist in that niche. The Four Seasons in Manhattan that knows that personal attention can make people feel special.

This is not a perfect book. As always Seth can be by turns insightful and annoying. He has a self-confessed tendency to hyperbole and some of the case studies are a bit flaky. Overall though this is a great book and I thoroughly recommend it.

Oh yes and the title of the book? Well it's simple. If you were driving along in the countryside and you saw a cow that was purple instead of the usual brown or black and white ones, that would be remarkable wouldn't it? You'd probably tell someone about it when you arrived home: "hey you'll never guess what I saw - a purple cow!" That's what customers say about remarkable products.


#Twitternovels version of this review: Don't be boring. Playing safe is risky. You must be remarkable - you must be a Purple Cow


A serendipity button

I always dreamed of what I called a Serendipity Button - a kind of sliding scale you could apply to (say) search results. A low serendipity setting would show you the results you were expecting, but a high serendipity setting would reveal apparently unrelated, but curiously intriguing, results that you were not expecting.

Wouldn't that be cool?
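
Here is a toy sketch of how the slider might work, with purely random picks standing in for the "curiously intriguing" part - a real version would want something smarter than random sampling, such as items weakly related to the query. The function name and parameters are my own invention:

```python
import random


def serendipitous_search(ranked_results, corpus, serendipity: float, k: int = 10):
    """serendipity=0.0 -> the plain top-k results you were expecting;
    serendipity=1.0 -> almost every slot goes to an unexpected item."""
    n_wild = round(k * serendipity)           # slots given over to surprises
    expected = ranked_results[: k - n_wild]   # the results you asked for
    pool = [doc for doc in corpus if doc not in expected]
    surprises = random.sample(pool, min(n_wild, len(pool)))
    return expected + surprises
```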