Between them, they'd racked up over $5 million in winnings on the television quiz show Jeopardy. They were the best players the show had produced over its decades-long lifetime: Ken Jennings had the longest unbeaten run, at 74 winning appearances, while Brad Rutter had earned the biggest prize pot, with a total of $3.25 million.

Rutter and Jennings were Jeopardy-winning machines. And in early 2011, they agreed to an exhibition match against an opponent who'd never even stood behind a Jeopardy podium before.

But this Jeopardy unknown had spent years preparing to take on the two giants in the $1m match, playing 100 games against past winners in an effort to improve his chances of winning.

That opponent didn't smile, offered all his answers in the same emotionless tone, and wouldn't sit in the same room as his fellow contestants. He had to work so hard at keeping his cool, and was so noisy doing it, that he was thought too distracting to take the podium in person. He was kept in a back room, his answers piped into the studio.

You wouldn't know by looking at him what he was thinking – perhaps you'd spot just a tinge of colour when he was puzzling over an especially hard question.

The contender started out with a run of winning answers – he knew his Beatles songs, Olympic history, literary criminals. Sure, he wasn't too familiar with his Harry Potter, but he stretched out a lead all the same, leaving Rutter and Jennings trailing thousands of dollars behind.

But questions on decades tripped him up, and Rutter fought back, piling up enough cash to unsettle anyone who'd bet on the outcome of the match. By the end of the first of the special exhibition match shows, you'd have been hard pushed to work out which contestant was safest with your money.

But then Double Jeopardy happened. The upstart powered through the big questions, winning even with guesses he was far from convinced about, and placing odd bets that came good.

By the end of the second episode, the unknown had $25,000 more than his closest opponent, Rutter. Rutter and Jennings looked increasingly uncomfortable as it began to look like they'd get a pasting from the new boy, bobbing in frustration as their opponent buzzed in before them time and time again.

Jennings managed a late fightback in the third episode, but the new opponent gradually clawed back enough money to make it a close run.

All three correctly answered the last question – 'William Wilkinson's An Account of the Principalities of Wallachia and Moldavia inspired this writer's most famous novel' – with 'Who is Bram Stoker?', but Jennings appended his response with: "I for one welcome our new computer overlords".

He, and Rutter, had lost to Watson – a room-sized beast of a machine made by IBM and named after the company's founder Thomas J Watson.

Watson, consisting of ten racks of ten Power 750 servers, had to be kept apart from the human contestants because of the roar of its cooling system and was represented at the podium by an avatar of IBM's Smarter Planet logo, whose moving lines would go green when Watson had cracked a thorny problem, orange when the answer was incorrect.

While Watson had the questions delivered as text rather than by listening to the quizmaster, he played the game like his human counterparts: puzzle over the question, buzz in, give the answer that's most likely to be right, tot up some prize money.

And Watson was right a lot of the time. He won the game with $77,147, leaving Rutter and Jennings in the dust with $21,600 and $24,000 respectively.

It turned out that the real Jeopardy-winning machine was, well, a machine.

Three nights, two people, one machine and $1 million: the victory of IBM's Watson over two human contestants on Jeopardy was the first, and maybe only, time the machine impressed itself on the general public's consciousness.

But even before Watson secured its now-famous win, IBM was working on how to turn the quiz show-dominating machine into a serious business contender.

Watson began life five years before its TV appearance, when IBM Research execs were searching for the next "Grand Challenge" for the company. IBM periodically runs these Grand Challenges: selected projects that pit human against machine, have international appeal, are easy to grasp, and attract people into working in science and maths fields. Along with Watson, the Grand Challenges have spawned Deep Blue, the machine that famously beat grandmaster Garry Kasparov at chess, and the Blue Gene supercomputer.

In the mid-2000s, IBM was on the lookout for its next Grand Challenge. Paul Horn, then director of IBM Research, was in favour of trying to develop a machine that could pass the Turing Test, a way to measure machine intelligence by having a system attempt to fool a human into thinking they're having a conversation with another person.

As challenging as passing the Turing Test is – no machine has yet done it – it was felt that it perhaps wouldn't light up the public's imagination as other projects had. But were there any related challenges that could still bring together those elements of competing against humans and understanding human language?

"Chirapsia a human in Jeopardy is a pace in that direction – the questions are complicated and nuanced, and information technology takes a unique type of computer to take a chance of beating a human by answering those type of questions. I was running the research segmentation and I was bugging people in the organisation, in particular [onetime EVP in IBM'south software group] Charles Lickel," Horn said.

Lickel was inspired to take on the challenge of building a Jeopardy-winning computer after having dinner with his team. "We were at a steak house in Fishkill, New York. In the middle of dinner, all of a sudden the entire restaurant cleared out to the bar – I turned to my team and asked 'What's going on?'. It was very odd. I hadn't really been following Jeopardy, but it turned out it was when Ken Jennings was having his long winning streak, and everyone wanted to find out if he would win again that night, and they'd gone to the bar to see," Lickel said. Jennings won once again that night, and still holds the longest unbeaten run on Jeopardy, with 74 appearances undefeated.

The idea of a quiz champion machine didn't immediately win his team around, with many of Lickel's best staff saying they didn't believe a machine could compete with, let alone beat, flesh-and-blood champions.

"They initially said no, information technology's a lightheaded project to work on, it'south besides gimmicky, it'south not a real computer science test, and we probably tin't practise it anyway," said Horn.

Nonetheless, a team sufficiently adventurous to take on the challenge of building a Jeopardy winner was found.

It was still a small project and thoughts of commercialisation weren't uppermost in anyone's mind – Grand Challenges were demonstration projects, whose return for the company was more in the buzz they created than in any contribution to the bottom line. If commercialisation happened, great – but for now, Watson was just a bit of a moonshot for IBM.

Due to the initial size of the effort, it was funded from the research group's everyday budget and didn't require sign-off from Big Blue's higher-ups, meaning it could operate free of the commercial pressures of most projects.

Jeopardy's quirk is that instead of the quizmaster setting questions and contestants providing the answers, the quizmaster provides the answers, known as 'clues' in Jeopardy-speak, to which contestants provide a question. Not only would the machine need to be able to produce questions for the possible clues that might come its way on Jeopardy, it would first need to be able to pull apart Jeopardy's tricksy clues – work out what was being asked – before it could even provide the right response.

For that, IBM developed DeepQA, a massively parallel software architecture that examined natural language content both in the clues set by Jeopardy and in Watson's own stored data, along with looking into the structured information it holds. The component-based system, built on a series of pluggable components for searching and weighting information, took about 20 researchers three years to reach a level where it could tackle a quiz show performance and come out looking better than its human opponents.

First up, DeepQA works out what the question is asking, then works out some possible answers based on the data it has to hand, creating a thread for each. Every thread uses hundreds of algorithms to study the evidence, looking at factors including what the information says, what type of information it is, its reliability, and how likely it is to be relevant, then creates an individual weighting based on what Watson has previously learned about how likely they are to be right. It then generates a ranked list of answers, with evidence for each of its options.
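In code terms, that pipeline looks something like the sketch below: generate candidate answers, score each one against several evidence features, combine the scores with learned weights, and return a ranked, confidence-tagged list. This is a minimal illustration only – the feature names, weights and candidates are invented, and the real DeepQA used hundreds of scorers and far richer evidence.

```python
# A minimal, hypothetical sketch of a DeepQA-style answer pipeline.
# All feature names, weights and scores here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    evidence: dict  # feature name -> raw evidence score in [0, 1]

# Learned weights: in DeepQA these come from machine learning over past
# question/answer pairs; here they are simply made up.
WEIGHTS = {
    "text_match": 0.5,          # how well the evidence text matches the clue
    "type_match": 0.3,          # does the candidate have the expected type (person, place...)?
    "source_reliability": 0.2,  # how trustworthy the supporting source is
}

def score(candidate: Candidate) -> float:
    """Combine per-feature evidence scores into a single confidence value."""
    return sum(WEIGHTS[f] * candidate.evidence.get(f, 0.0) for f in WEIGHTS)

def rank(candidates):
    """Return (answer, confidence) pairs ordered by confidence, highest first."""
    return sorted(((c.answer, score(c)) for c in candidates), key=lambda x: -x[1])

candidates = [
    Candidate("Bram Stoker", {"text_match": 0.9, "type_match": 1.0, "source_reliability": 0.8}),
    Candidate("Mary Shelley", {"text_match": 0.4, "type_match": 1.0, "source_reliability": 0.8}),
]
for answer, confidence in rank(candidates):
    print(f"{answer}: {confidence:.2f}")
```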

The data that DeepQA would eventually be able to query for Jeopardy ran to 200 million pages, from a variety of sources. All the information had to be locally stored – Watson wasn't allowed to connect to the internet during the quiz – and understood, queried and processed at a fair clip: in Jeopardy's case, Watson had to spit out an answer in a matter of seconds to make sure it was first to the buzzer.

"When I left IBM in end of 2007, Watson was an embryonic project," said Horn. "It had iii people in Charles Lickel's area that got the information from the old Jeopardy programmes and were starting to railroad train the machine. It could barely beat a five year sometime at that time. The projection was 'god knows how long it would have to beat an developed, allow alone a thousand champion'. Then over fourth dimension when it looked like they started to accept a adventure, Dave under the leadership of John Kelly grew the project into something substantial," said Horn.

While there's still debate over exactly when the idea of making Watson pay its way finally took shape at IBM, by the time Watson took to the stage for its Jeopardy-winning performance, the show featured IBM execs talking about possible uses for the system in healthcare, and moves to establish a Watson business unit began not long after the Jeopardy shows aired.

IBM's then-CEO Sam Palmisano and its current CEO Ginni Rometty, under whose remit Watson fell at the time, began discussions in the weeks after the win, and the project was moved from under the wing of IBM Research and into the IBM Software group.

In August of 2011, the Watson business unit proper came into being, headed up by Manoj Saxena, who'd joined IBM some years before when the company he worked for, Webify, was acquired by IBM.

Saxena was the unit's employee number one. Within three months, he had been joined by 107 new Watson staffers, mostly technologists in the fields of natural language processing and machine learning.

Healthcare had already been suggested as the first industry Watson should target for commercial offerings, but there were no plans to confine it to medicine. Any information-intensive industry was fair game – anywhere there were huge volumes of unstructured and semi-structured data that Watson could ingest, understand and process quicker than its human counterparts. Healthcare might be a starting point, but banking, insurance and telecoms were all in the firing line.

But how do you turn a quiz show winner into something more business-like? The first job for the Watson team was to get to grips with the machine they'd inherited from IBM Research, understand the 41 separate subsystems that went into Watson, and work out what needed to be fixed up before Watson could put on its suit and tie.

In the Watson unit's first year, the system got sped up and slimmed down. "We serialised the threads and how the software worked and drove up the performance," Saxena said. "The system today compared to the Jeopardy system is approximately 240 percent faster and it is one-sixteenth the size. The system that was the size of a master bedroom will now run in a system the size of the vegetable drawer in your double-drawer refrigerator."

Another way of looking at it: a single Power 750 server, measuring nine inches high, 18 inches wide and 36 inches deep, and weighing in at around 100 pounds. Having got the system to a more manageable size for businesses, IBM set about finding customers to take it on.

IBM had healthcare pegged as its first vertical for Watson from the time of the Jeopardy win. Yet while Jeopardy Watson and healthcare Watson share a common heritage, they're distinct entities: IBM forked the Watson code for its commercial incarnation.

Jeopardy Watson had one task – take an answer, understand it, and find the question that went with it. It was a single-user system – had three quizmasters put three answers to it, it would have thrown the machine into a spin. Watson had to be retooled for a scenario where tens, hundreds, perhaps more, clinicians would be asking questions at once, and not single questions either – complex conversations with several related queries one after the other, all asked in non-standard formats. And, of course, there was the English language itself, with all its messy complication.

"In that location were fundamental areas of innovation that had to be done to go beyond Jeopardy – there was a tremendous amount of pre-processing, post-processing and tooling that nosotros have added around the core engines," added Saxena. "It'southward the equivalent of getting a Ferrari engine then trying to build a whole race car effectually information technology. What nosotros inherited was the cadre engine, and we said 'Okay, let's build a new affair that does all sort of things the original Jeopardy system wasn't required to do'."

To get Watson from Jeopardy to oncology, the Watson team went through three processes: content adaptation, training adaptation, and functional adaptation – or, to put it another way, feeding it medical information and having it weighted appropriately; testing it out with some practice questions; and then making whatever technical adjustments were needed – tweaking taxonomies, for instance.

The content adaptation for healthcare followed the same path as getting Watson up to speed for the quiz show: feed it information, show it what right looks like, then let it guess what right looks like and correct it if it's wrong. In Jeopardy, that meant feeding it thousands of question-and-answer pairs from the show, then demonstrating what a correct response looked like. It was then given just the answers, and asked to come up with the questions. When it went wrong, it was corrected. Through machine learning, it would begin to get a handle on this answer-question thing, and modify its algorithms accordingly.
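That correct-and-adjust cycle is, at heart, supervised learning. The toy sketch below shows one way such a correction could work, nudging feature weights towards the evidence that backed the right response and away from the evidence behind the wrong guess. It's a generic, perceptron-style update for illustration – not IBM's actual training method – and all names and numbers are made up.

```python
# A toy, hypothetical version of the train-correct-adjust loop described above.
def update_weights(weights, guess_features, correct_features, rate=0.1):
    """Perceptron-style correction: boost weights on the features of the
    correct response, reduce weights on the features of the wrong guess."""
    for f, v in correct_features.items():
        weights[f] = weights.get(f, 0.0) + rate * v
    for f, v in guess_features.items():
        weights[f] = weights.get(f, 0.0) - rate * v
    return weights

# One correction round: the system guessed "Mary Shelley" when the right
# response was "Bram Stoker", so evidence features that favoured the correct
# answer gain weight for next time.
weights = {"text_match": 0.5, "type_match": 0.3}
weights = update_weights(
    weights,
    guess_features={"text_match": 0.4, "type_match": 1.0},
    correct_features={"text_match": 0.9, "type_match": 1.0},
)
print(weights)  # text_match rises to 0.55; type_match stays ~0.3 (both responses had it)
```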

"Information technology would be fed many cases where the history was known and proper treatment was known, then, analogous to preparation for Jeopardy, information technology's been given cases so it suggests therapies," Kohn said.

Some data came from what IBM describes as a Jeopardy-like game called Doctor's Dilemma, whose questions include 'the syndrome characterized by joint pain, abdominal pain, palpable purpura, and a nephritic sediment?'. (The answer, of course, is Henoch-Schönlein purpura.)

The training, says Kohn, "is an ongoing process, and Watson is quickly improving its ability to make reasonable recommendations the oncologists think are helpful."

By 2012, two healthcare organisations had started piloting Watson.

WellPoint, one of the US' biggest insurers, was one of the pair of companies that helped define the application of Watson in health. The other was Memorial Sloan-Kettering Cancer Center (MSKCC), an organisation IBM already had a relationship with, and which is located not far from both IBM's own Armonk headquarters and the research laboratories in Yorktown Heights, New York that still house the first Watson.

And it was this relationship that helped spur Watson's first commercial move into working in the field of cancer therapies. While using Watson as a diagnostic tool might be its most obvious application in healthcare, using it to assist in choosing the right therapy for a cancer patient made even more sense. MSKCC was a tertiary referral centre – by the time patients arrived, they already had their diagnosis.

So Watson was destined first to be an oncologist's assistant, digesting reams of data – MSKCC's own, medical journals, articles, patient notes and more – along with patients' preferences, to come up with suggestions for treatment options. Each would be weighted accordingly, depending on how relevant Watson calculated it was.

Unlike its Jeopardy counterpart, healthcare Watson also has the ability to go online – not all its information has to be stored locally. And while Watson had two million pages of medical data from 600,000 sources to eat, it could still make use of the general knowledge garnered for Jeopardy – details from Wikipedia, for example. (What it doesn't use, however, is the Urban Dictionary. Fed into Watson late last year, it was reportedly removed after answering a researcher's query with the word "bullshit". "We did find some interesting responses, so we had to shut that down," Saxena said diplomatically. "That is not to be repeated, because it would be seen as very improper in certain cases, and we had to teach Watson the right business behaviour.")

As such, the sources are now medical publications like Nature and the British Medical Journal. And there are other safety nets too.

"In the instruction stage, a dr. – a cancer care specialist in this case – sits downwardly and asks questions of Watson and corrects the machine learning. The doctor and a data scientist are sitting adjacent to each other, correcting Watson. Spurious material, or conflicted material or something from a pharmaceutical company that the doctor feels may be biased – that is caught during the preparation cycle," added Saxena.

WellPoint and MSKCC used Watson as the basis for systems that could read and understand volumes of medical literature and other information – patients' treatment and family histories, for example, as well as clinical trials and articles in medical journals – to help oncologists by recommending courses of treatment.

A year of working with both organisations has produced commercial products: Interactive Care Insights for Oncology, and the WellPoint Interactive Care Guide and Interactive Care Reviewer. Interactive Care Insights for Oncology provides suggestions for treatment plans for lung cancer patients, while the WellPoint Interactive Care Guide and Interactive Care Reviewer checks clinicians' suggested treatments against their patients' plans, and is expected to be in use at 1,600 healthcare providers this year.

Watson has bigger ambitions than being a clinician's assistant, however. Its medical knowledge is around that of a first-year medical student, according to IBM, and the company hopes to have Watson pass the general medical licensing board exams in the not too distant future.

"Our work today is in the very early stages around practice of medicine, around chronic care diseases. We're starting with cancer and we will soon add diabetes, cardiology, mental wellness, other chronic diseases. And then our piece of work is on the payment side, where nosotros are streamlining the authority and approval process between hospitals, clinics and insurance companies," Saxena said.

The ultimate aim for Watson is to be an aid to diagnosis – rather than just suggesting treatments for cancer, as it does today, it could help doctors identify the diseases that bring people to the clinics in the first place.

Before then, there is work to be done. While big data vendors often trumpet the growth of unstructured data and the abandoning of relational databases, for Watson, it's these older sources of data that present more of a problem.

"Watson works specifically with natural language – free text or text-like information – and that's approximately 80 pct of the huge volumes of healthcare information bachelor to us," said Kohn. "Then there's the 20 pct that is structured information – basically, numerical data – or images, like MRIs, CAT scans, and then on. Watson does not process structured data directly and it doesn't interpret images. It can interpret the report attached to an paradigm, merely not the image itself."

In addition, IBM is working on creating a broader healthcare offering that will take it beyond its oncology roots.

"Even though Watson is working with these two organisations, what the designers and computer scientists are focusing on [is] that any they develop is generalisable, it's not just niche for cancer therapy and especially for the several cancers we're working with. We're using it as a learning procedure to create algorithms and methodologies that would be readily generalisable to any area of healthcare. They don't have to have to say, correct, nosotros have oncology under control, now permit's outset once more with family practice or cardiology," Kohn said.

Watson has also already found some interest in banking. Citi is using Watson to improve customer experience with the bank and create new services. It's easy to see how Watson could be put to use, say, deciding whether a borderline-risk business customer is likely to repay the loan they've applied for, or picking out cases of fraud or identity theft before customers are even aware they're happening.

Citi is still early in its Watson experiments. A spokeswoman said the company is currently just "exploring use cases".

From here on in, rather than being standalone products, the next Watson offerings to hit the market will be embedded into products in the IBM Smarter Planet line. They're expected to appear in the second half of the year.

The first such Smarter Planet product appeared in May: IBM Engagement Advisor. The idea behind the Engagement Advisor, aimed at contact centres, is that customer service agents can query their employers' databases and other information sources using natural language while they're conducting helpline conversations with their clients. One of the companies testing out the service is Australia's ANZ bank, where it will be assisting call centre staff with making financial services recommendations to people who ring up.

Watson could presumably one day scour available evidence for the best time to find someone free to talk, and decide the communication channel most likely to generate a positive response, or pore over social media for disgruntled customers and provide answers to their problems in natural language.

There are also plans to change how Watson's delivered. Instead of simply interacting with it via a call centre worker, customers will soon be able to get to grips with the Engagement Advisor themselves. Rather than have some call centre agent read out Watson-generated information to a customer with, say, a fault with their new washing machine, or a stock-trader wanting advice on updating their portfolio, the consumer and trader could just quiz Watson directly from their phone or tablet, by typing their query straight into a business' app. Apps with Watson under the hood should be out in the latter half of this year, according to Forbes.

IBM execs have also previously suggested that Watson could end up a supercharged version of Siri, where people will be able to speak directly into their phone and pose a complex question for Watson to answer – a farmer holding up his smartphone to take video of his fields, and asking Watson when to plant corn, for example.

IBM is keen to spell out the differences between Watson and Siri. "Watson knows what it knows – and by listening, learning and using human-like thinking capabilities to uncover insights from Big Data, Watson also quickly ascertains what it doesn't know. Siri, on the other hand, merely looks for keywords to search the web for lists of options that it chooses one from," the company says. Still, the comparison holds: Watson could certainly have a future as your infinitely knowledgeable personal assistant.

While adding voice-recognition capabilities to Watson should be no great shakes for IBM given its existing partnerships, a role like that would also require Watson to be able to recognise images (something IBM's already working on) and to query all sorts of sources of data, including newspapers, books, photos, repositories of data that have been made publicly available, social media and the internet at large. That Watson should take on such a role in the coming years, especially if the processing goes on in an IBM datacentre and not on the mobile itself, as you would expect, is certainly within the realms of the possible.

As IBM seeks to embed Watson's capabilities into more and more products, how far does the company think Watson will spread in the coming years? It will only say, gnomically: "As we continue to scale our capabilities, we intend to make Watson available as a set of services in many industries." Want a better answer? Better ask Watson.