
Greenteeth Digital Publishing

Welcome to Greenteeth Digital Publishing

Many years ago, low-grade folk demon Jenny Greenteeth found her natural habitat shrinking due to modern agricultural methods. It is a fallacy that folk demons are stupid creatures made of fear and superstition; they are actually highly intelligent, and Jenny discovered a new habitat in electricity, which is as fluid as water though not as substantial. Living in the electricity system she could do mischief, but she had no way of making herself known as the originator of many pranks and glitches. Then came the internet ...
Jenny has witnessed many investment bubbles, but the latest, the artificial intelligence bubble, looks to have burst before it was even half inflated ...



Has The Artificial Intelligence Bubble Already Burst?

Ian R Thorpe

15 January, 2019

The rapid progress of digital technology, the exponential growth of computer processing power and storage capacity, the ubiquity of the internet and the massive hype around every innovation relating to it have already blown more than their fair share of investment bubbles.

However, there's an old song, in the past sung by vaudeville artists in British music halls and latterly by the faithful supporters of certain under-achieving football (oh go on, soccer if you must) teams:

I'm forever blowing bubbles, pretty bubbles in the air, They fly so high, reach to the sky, Then like my dreams they fade and die ...

(YouTube video of West Ham United supporters singing I'm forever blowing bubbles)

Yes, those investment bubbles, like the bubbles children blow with soapy water, have not lasted long or, in reality, flown very high. The first was the infamous dotcom bubble when, between 1997 and 2001, misplaced confidence in technology led investors to bet heavily on unproven business models and startups with no identified revenue stream and no product to sell, just an idea and a piece of software. Billionaires were created overnight and broken just as quickly.

A few years later came smaller bubbles, blown with hot air surrounding various startups that got massive free publicity from the mainstream media ahead of flotation on the stock markets, sold oodles of shares, made billionaires of their founders and were then quietly forgotten. Eventually, with the dotcom bust forgotten because investors' memories are short and it's easy to be optimistic when you are playing with other people's money, social media became the next bubble. There was only ever going to be one winner: the one that had a cosy relationship with the US administration at the time it was becoming dominant, and the cooperation of US national security agencies, which gave it carte blanche to flout laws and violate users' right to privacy.

The latest bubble is in the stocks of artificial intelligence start-ups, as A.I., like so many other great technical advances that were going to bring such benefits to humanity and change the way we all live, is already, while still in its infancy, failing to live up to the hype.

In the early 1960s, at California's Stanford University, John McCarthy founded a research centre with the goal of investigating the possibility of creating human-like qualities of intelligence in a machine. He had dubbed this new area of research Artificial Intelligence.

The concept of developing machines that could think like humans was not new. Interest in the field grew quickly as members of the academic community, whose grip on reality has always been tenuous at best, grasped the idea of machines surpassing humans in intellectual and technical ability as eagerly as the idea of flying to distant galaxies and finding planets with life-supporting eco-systems to colonise. Both are about as likely to happen in the near future as an arthritic ninety-year-old winning the Olympic 100 metres.

The Stanford research centre was fortuitously set up at a time when computer programs were being developed that could beat humans at chess, and, thanks to the huge amounts of government money being pumped into more advanced computer systems at the height of the Cold War arms race, developers were making rapid progress in other areas such as algebra and language translation.

Artificial Intelligence had already been a goal of the academic community for some time when McCarthy set up his laboratory, telling the organisations that funded him that a fully intelligent machine could be built within a decade. That was five and a half decades ago, and things have not panned out as predicted. The reason is simple: academics like McCarthy spend all their time in university studies with their heads buried in books, so in spite of their wealth of technical knowledge they are short on life experience and have little understanding of how humans work as individuals and in communities.

Nine years after McCarthy’s rash promises, and after millions more Pounds, Dollars, Deutschmarks, Francs (this was before the Euro) and Yen had been pumped into research around the world, the UK government realised it was no nearer seeing a return on its money than on the day McCarthy promised a fully intelligent machine within a decade. The government of the day commissioned the British mathematician Sir James Lighthill to assess whether Artificial Intelligence was a realistic proposition and report back.

Lighthill’s conclusion, published in 1973, was damning. “In no part of the field have the discoveries made so far produced the major impact that was then promised,” his report said. “Most workers in AI research and in related fields confess to a pronounced feeling of disappointment.”

Academics, displaying that absolute certainty about the anticipated results of unfinished projects which is unique to the scientific world, attacked Lighthill for his scepticism, but the report triggered a collapse in government funding, in the UK and elsewhere. It was seen as the catalyst for what became known as the first “A.I. winter”, a period of disillusionment and funding shortages in the field, and a realisation that computers are very useful tools, not living entities.

John McCarthy, one of the early pioneers in AI research

More than 50 years after McCarthy’s bold predictions, and after innumerable claims that 'true artificial intelligence is just a few years away,' technologists are once again bubbling with optimism about artificial intelligence. Venture capital funding for AI startup companies doubled in 2017 to $12bn (£9.3bn), almost a 10th of the total investment in new businesses, according to accountants KPMG. In Europe, more than 1,000 new companies have attracted venture funding since 2012, 10 times more than fields such as blockchain or virtual reality, according to the tech investor Atomico.

Look around the internet and you will find various memes stating that 99% or 90% of technology startups fail. You will also find many angry (but seldom well-reasoned) rebuttals from tech fanboys saying these statistics are made up and have no basis in fact. So who do you believe? The best thing is to make up your own mind after looking at evidence from sources we would hope are reliable.

Research on the topic shows that most sources, from business schools to the US federal government, only track American startup failure rates (see below). One of the problems in interpreting the results lies in definitions: what is meant by failure, for example, a business simply struggling to break even, or a business going broke, having lost all the investment without seeing a return? This is the key question. And for the sake of comparison we must also be clear about what is meant by success.

Simple enough criteria you may think, but no two studies seem to agree on these definitions.

Let's say, for example, that success is measured by starting a company that achieves $1m in revenue; on that measure, 95% of all startups fail. Many entrepreneurs would see achieving $1m in revenue as a good measure of success. That means there are many small businesses between $0 and $1m that survive, but are bumping along without making a significant impact and without making much, if any, income.

Here are results from several studies:

1. Growth/size failure rates:
Source: a Fortune article citing an Association for Corporate Growth study

2. Successful investment failure rates:
Source: Harvard Business School research

3. Survival failure rates:
Source: SBA analysis in its Small Business Facts report

So that 90% failure rate is not far out according to those figures (I recommend looking at the source material, of course; things are seldom what they first appear to be).

Giants such as Google and Microsoft are trying to expand their activities and consolidate their future via A.I. Early in 2018, Google chief executive Sundar Pichai called the technology “one of the most important things that humanity is working on”, adding: “It’s more profound than, I don’t know, electricity or fire.”

That, of course, is a perfect example of the hype referred to above: without electricity there could be no computers, and without fire we would still be running around naked and throwing turds at any creature that dared invade our space, as our nearest relatives in the animal kingdom still do.

Driven perhaps by another meme, FOMO (the Fear Of Missing Out), corporate businesses not previously associated with the information technology world are trying to grab a ride on the bandwagon too. Analysis of investor calls made by US public companies revealed the term “artificial intelligence” was mentioned 791 times in the third quarter of 2017, often in connection with things that are really based on tried and tested technology and cannot even loosely be described as 'intelligence'.

Significant breakthroughs are promised. Driverless cars are often predicted within a decade, though numerous technical and safety problems remain to be overcome and costs are likely to be prohibitive for all but the wealthiest one per cent.

Rising global tensions are boosting government investment, particularly in China, as governments look to weaponise digital technology and lovers of high drama speculate on the possibility of cyber attacks shutting down power grids or transport networks. Elsewhere, economists fret about widespread unemployment. Others, such as the late Stephen Hawking, feared that the rise of robot weapons could eradicate humanity.

But another kind of pessimism is also gathering momentum. What if, instead of being hopelessly unprepared for the robot invasion and the era of Artificial Intelligence, we have, amid all the excitement about sex robots and such, drastically overestimated the disruption likely to be caused by recent developments? What if, instead of being on the cusp of one of the greatest breakthroughs in history, we are in a similar position to that of the Seventies, when scientists told us "true Artificial Intelligence is just around the corner" at the very moment the bubble burst?


“The whole idea of making machines intelligent has been a long goal of computer scientists and, as long as we’ve been following it, AI has gone through these waves,” says Ronald Schmelzer an analyst with Cognilytica, a consultancy firm focused on artificial intelligence. “A lot of the claims [from the Sixties and Seventies] sound very familiar today. It seems to be one of those recurring patterns.”

Indeed, many of the recent breakthroughs in AI have been along the same lines as the chess and language breakthroughs of the Fifties and Sixties, if in far more advanced versions. Two years ago, Google’s AI subsidiary DeepMind beat the world champion at Go, an ancient Chinese board game many times more complicated than chess. In March, researchers at Microsoft claimed they had created the first machine that could beat humans when it came to translating Chinese to English. But playing chess and translating are governed by strict rules the machine can follow, in contrast to tasks like writing a short story or negotiating an obstacle course.

Google Translate is an excellent software suite: when we lack knowledge of a language but know what the language is, it can give us a translation into our own language that enables us to understand what information a piece of text contains. The grammar and syntax will not be perfect, and metaphor and other figures of speech are lost completely, but the translation will suffice. But the classic reverse translation test shows there is no intelligence at work. If you speak another language competently besides your own, take a section of text and translate it. After checking that the translation is reasonable, run it through the software again and translate it back into your own language. Don't be surprised if it returns a complete mess.
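If you fancy trying that round trip yourself, it is easy to script. The sketch below is only an illustration, not part of any product: it assumes the third-party Python package deep_translator is installed and that its GoogleTranslator class behaves as shown, and the sample sentence and English-to-French language pair are arbitrary choices; any online translation service could be substituted.

```python
# Reverse (round-trip) translation test: English -> French -> English.
# Assumes the third-party package is available: pip install deep_translator
from deep_translator import GoogleTranslator

original = "The spirit is willing but the flesh is weak."

# Translate out of English...
french = GoogleTranslator(source="en", target="fr").translate(original)

# ...then translate the result straight back again.
round_trip = GoogleTranslator(source="fr", target="en").translate(french)

print("Original:  ", original)
print("French:    ", french)
print("Round trip:", round_trip)
# If the software "understood" the text, the round trip would come back close
# to the original; in practice idiom and figures of speech often come back
# mangled, which is the whole point of the test.
```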

The current excitement about AI owes largely to two trends: the leap in number-crunching power that has been enabled by faster and more advanced processors and remote cloud computing systems, and an explosion in the amount of data available, from the billions of smartphone photos taken every day to the digitisation of records.

This combination, as well as the unprecedented budgets at the disposal of Silicon Valley’s giants, has led to what researchers have long seen as the holy grail for AI: machines that learn. While the idea of computer programs that can absorb information and use it to carry out a task, instead of having to be programmed, goes back decades, the technology has only recently caught up. But while it has proven adept at certain tasks, from superhuman prowess at video games to reliable voice recognition, some experts are becoming sceptical about machine learning’s wider potential.
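To see what "learning instead of being programmed" means in the simplest possible terms, here is a minimal sketch in plain Python, not a description of any particular company's system: a perceptron that picks up the logical AND rule from labelled examples rather than having the rule written into it. The training data, learning rate and number of passes are arbitrary choices for illustration.

```python
# A minimal "machine that learns": a perceptron picking up the AND rule
# from labelled examples rather than from hand-written if/else logic.

examples = [            # (input pair, desired output)
    ((0, 0), 0),
    ((0, 1), 0),
    ((1, 0), 0),
    ((1, 1), 1),
]

w = [0.0, 0.0]          # weights: the machine starts knowing nothing
b = 0.0                 # bias
rate = 0.1              # learning rate (arbitrary)

def predict(x):
    # Fire (output 1) only if the weighted sum clears the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeatedly show the examples and nudge the weights after each mistake.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

for x, target in examples:
    print(x, "->", predict(x), "(expected", target, ")")
```

After a handful of passes the weights settle on values that reproduce the rule, which is all "machine learning" means here: the behaviour comes from the data, not from a programmer spelling out the answer.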

“AI is a classic example of the technology hype curve,” says Rob Kniaz, a partner at the investment firm Hoxton Ventures. “Three or four years ago people said it was going to solve every problem. The hype has gone down but it’s still way overblown. In most applications it’s not going to put people out of work.”

Schmelzer says that funding for AI companies is “a little bit overheated”. “I can’t see it lasting,” he adds. “The sheer quantity of money is gigantic and in some ways ridiculous.”

Most AI sceptics point out that the breakthroughs that have been achieved so far are in relatively narrow fields, with clearly defined structures and rules, such as games. The rapid advancement in these areas has led to predictions that computers are ready to surpass humans at all sorts of tasks, from driving to medical diagnosis.

But transposing prowess in games to the real world is another task altogether, something that became clear with fatal consequences earlier this year. In March, a self-driving car being tested by Uber in Arizona failed to stop in front of Elaine Herzberg when the 49-year-old stepped out into the street.

She became the first person to be killed by a driverless vehicle, which was travelling at 38mph. The car’s systems had spotted Herzberg six seconds before the crash, but had failed to take action. The incident was the most striking example yet that the grand promises made about AI just a few years ago were detached from reality. While driverless cars were once predicted to be widely available by 2020, many experts now believe them to be decades away.

A driverless Uber vehicle was involved in the death of a woman in March (Credit: Reuters)

Driverless cars have not been the only setback. AI’s potential to revolutionise healthcare has been widely touted, and Theresa May said in 2018 that AI would be a “new weapon” in fighting cancer.

The reality, so far at least, has been less promising. IBM’s Watson technology, an AI system that has promised major breakthroughs in diagnosing cancer, has been accused of repeatedly misdiagnosing conditions. Shortly after the Uber crash, the AI researcher Filip Piekniewski wrote that a new AI winter is “well on its way”, arguing that breakthroughs in machine learning had slowed down.

Schmelzer says that companies have stopped placing blind faith in AI, pointing out comparisons with the dotcom bubble when businesses demanded an internet presence even when it was unnecessary. “It was technology for technology’s sake and there was a lot of wasted money. I think we started to see that [with AI].”

Kniaz, of Hoxton Ventures, agrees that the bubble has started to deflate, saying that while companies would often attract funding merely for mentioning artificial intelligence in investor presentations, they are now having to prove that it works.

However, he says that even the narrow progress made in recent years has plenty of real-world uses, even if it is a long way from matching human intelligence. “We’re now at the point where it’s a little more sane,” Kniaz says. “It’s reaching a nice stable point now. You’re seeing it applied to better problems.”

RELATED POSTS:
Global collapse
The global corporatocracy
The globalisation of serfdom
Global trade crash
Is Europe Waking Up To The Threat Of Globalism Posed by Secret Trade Treaties
Living within the conspiracy
The New World Order (catalogue)
New World Order omnibus
UN Migration Pact: Which Governments Are Prepared To Sell Out Their People? New World Order's cashless society
DC Leaks Expose George Soros Manipulating Elections
New world order is an old idea
Free trade conspiracy
After Triggering Mass Migration Crisis Germany Is Now Bribing Refugees To Leave


Elsewhere:

[ Greenteeth UK ] ... [ Daily Stirrer.shtml ]...[Little Nicky Machiavelli] ... [ Our Page on on Substack ]... [ Ian's Authorsden Pages ]... [ It's Bollocks My Dears, All Bollocks ] ... [ Minds ] ... [Scribd]... [ Boggart Abroad] ... [ Grenteeth Bites ] ... [ Latest Posts ] ... [Ian Thorpe at Flickr ] ... [Latest Posts] ... [ Tumblr ] ... [ Blog Bulletin ]

https://www.quora.com/profile/Ian-Thorpe-6 [ Ian, Greenteeth editor, at Facebook ]