Neural networks for dummies or why they are a no-go for AI anyway

Google's AI victory at the game of Go paved an unbelievably wide track for a bazillion publications about powerful, unbeatable neural networks.
The claims are second to none: the age of AI is at the doorstep, tomorrow no human will be needed anymore; behold, the technological singularity is here.

Well, to what extent is the buzz justified? Common sense and cybernetics will be our guides.

— What's the difference between a rat and a hamster?
— The hamster's got much better PR!
  1. What neural networks are and what aren’t
  2. Cabbages and kings – competencies at a closer look
  3. Useful application?
  4. How to create a neural network
  5. Now about profits
  6. Fundamentally limited applicability
  7. Summary

 

1. What neural networks are and what aren’t

In layman's terms, neural networks are just another set of computer algorithms. That's all.
In this respect, they are indistinguishable from the software products we are used to, such as Tetris, Counter-Strike or even Microsoft Windows.

Names featuring words like "neurons" and "networks" in fact have nothing to do with physiological reality, just as the beauty quarks of chromodynamics have nothing in common with Merlin and his staff.

Neural networks are different from what the name suggests. They can be trained, though the result is more like stereotyping or automatism.
They "think", but NOT the way we do. And never will, no matter how much computing power we ever throw at them.

By the way, you might be surprised, but neural networks are quite an old story. It began somewhere around 1960, the first operating one being Frank Rosenblatt's perceptron.

Today they are proclaimed the most advanced and cutting-edge products, especially for pattern recognition, even though the very first neural network was already capable of it, more than 50 years ago.

The story of AI being just around the corner is of the same vintage: such talk has been going on since those old days. AI and the fusion reactor are grey-headed babies of almost the same age, give or take.

But now, with all those good old days gone forever, what, in fact, has changed?

Nothing, really. Except for hardware: we have more powerful computers able to handle more complex neural networks. That's all.

 

2. Cabbages and kings – competencies at a closer look

Neural networks are indeed constrained. In fact, they have yet to surpass even the human cerebellum, the part of the Homo sapiens brain responsible for automatisms; the areas related to consciousness are located elsewhere.

In other words, the neural network algorithms we can reproduce are at insect level, far from reaching human-equal performance.

How come?

As Sir Roger Penrose argued, a famous corollary of Gödel's incompleteness theorem is that algorithmic modeling fails whenever it tries to reproduce abstract thinking.

Abstract reasoning (recall Plato's Theory of Ideas) is an immanent trait of a true mind.

It might even define it.

Therefore, since any neural network is an algorithm, asserting that true (strong) AI is the product of such a network is no more than marketing gibberish without any sound scientific grounds.

However, we live in a world full of diversity. Those who prefer promo stories to math-determined reality are among us too. They should invest their hopes in start-ups claiming that an ant can learn mathematical analysis.

 

3. Useful application?

The so-called general public was extremely impressed by the notorious victory of AlphaGo, Google's neural network, over a human champion. The mass media did a great job pumping up that bubble.

Essentially, barbarian chieftains must have been similarly astonished by the mechanical animals and singing birds presented at the court of the Byzantine emperors. In other words, these two technologies of different eras do have one thing in common: their objectives.

At the end of the day, what is the real, practical use (unless we are talking about mass media impact) of AlphaGo's ability to defeat anyone at this ancient Chinese game?

One thing is crystal clear: no human would ever play against it, since the final result is predetermined. That means no more achievements, no more victories. You did it once? Cool. You're done!

On the other hand, there was plenty of fuss when the South Korean government announced about 1 billion of investment in an AI development project.

Rejoice, Google's Danae! Your shower of gold is here, an abundance of hard cash.

There is a big BUT: the "universal" AlphaGo algorithm, as the mass media dubbed it, is useless in any practical sense, because each and every neural network is extremely ad hoc.

That is the fundamental limitation, the enigmatic difference from a natural mind.

The way neural networks can be applied is somewhat like our basic motor skills: being a perfect driver never helps you with fencing.
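This specificity can be shown in a few lines of code. Below is a minimal sketch of ours, not from the original: the classic perceptron learning rule (textbook material) trains a single neuron on the AND gate; it then scores perfectly on its own task but misses half the cases when asked to compute OR. The gate datasets and hyperparameters are illustrative choices.

```python
# A neuron trained for one task is of little use for another: here a
# single Rosenblatt-style perceptron is trained on the AND gate and
# then asked to compute OR. Data and learning rule are textbook choices.

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0           # weights and bias start at zero
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # step activation: fire only if the weighted sum is positive
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # perceptron rule: nudge weights toward the desired output
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w, b = train(AND)
and_hits = sum(predict(w, b, *x) == y for x, y in AND)
or_hits = sum(predict(w, b, *x) == y for x, y in OR)
print(and_hits, or_hits)  # → 4 2
```

The learned weights encode exactly one decision boundary and nothing else; there is nothing in them to transfer to the neighbouring task.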

 

4. How to create a neural network

Essentially, making a real neural network takes three stages: programming/setup, "training" and application.

Even at this very first stage you are challenged. The stumbling block is a practical consequence of Penrose's work: no neural network can be created unless an external device with abstract-thinking capacities is already in place.

In other words, a programmer, a full-fledged mind (of natural origin, so to speak), has to be involved first.

No matter how much you indulge in talk of self-training neural networks, none of them can learn anything at all without a human mind. It is much like training your cerebellum, which is a no-go without "external" free will and desire, your conscious guidance.

Then, once your neural network is properly set up, the training begins. Which is, at the end of the day, one and the same algorithm repeated many times over with different input data.

The huge datasets produced by such "training" or "learning" are, in fact, a database of matched value pairs: input X, output Y.

The amount of computation required is huge. That's why you'd better stock up on a couple of supercomputers and a considerable amount of free time.

Finally, at the third stage, our graduate neural network is ready for the practical application it was designed for.
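The three stages above can be sketched in miniature using nothing beyond the Python standard library. The single logistic neuron, the OR-gate dataset and the learning rate below are hypothetical choices for illustration, not anything from the original:

```python
import math
import random

# Stage 1 (setup): a programmer, a natural mind, defines the model:
# here, a single logistic neuron with two inputs and random weights.
random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

def forward(x1, x2):
    # sigmoid of the weighted sum, an output between 0 and 1
    return 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

# Stage 2 ("training"): one and the same update rule, repeated many
# times over with different (input X, output Y) pairs (the OR gate).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
lr = 1.0
for _ in range(2000):
    for (x1, x2), y in data:
        out = forward(x1, x2)
        grad = out - y               # gradient of the log loss
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

# Stage 3 (application): the trained network does the one job
# it was designed for, and nothing else.
print([round(forward(x1, x2)) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

Real networks differ from this toy only in scale, which is exactly where the supercomputers and the free time come in.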

 

5. Now about profits

What are those ways? Does anybody know?

We are not talking about the mass media buzz meant to generate and enjoy AI development investments.

A technological revolution about to be unleashed, a forthcoming technological singularity: can anyone actually see it happening?

Where is the magic wand to tame AlphaGo's infinite promise?

There is a Wikipedia article of the same name that might be of help: "When compared with Deep Blue or with Watson, AlphaGo's underlying algorithms are potentially more general-purpose, and may be evidence that the scientific community is making progress towards artificial general intelligence." The crucial, though ignominious, words in that quote are "potentially" and "may be".

Need a translation? Business-wise it means that no one actually knows what AlphaGo is good for. Otherwise, we may assure you, far more definite language would have been used without any doubt.

Even better: "as noted by entrepreneur Guy Suter, AlphaGo itself only knows how to play Go, and doesn't possess general purpose intelligence" (ibid.).

 

6. Fundamentally limited applicability

Recently we published an article showing that real-life business does NOT need any real artificial intelligence. As simple as that. The only waivers we can think of at the moment are the entertainment industry and some human-brain aids such as neural interfaces, search algorithms, etc.

The beauty of our world is that the impossible equals the useless. Does that ring a bell? Something like "everything which is not forbidden is allowed"?

On the other hand, the technology we have watched advancing over these couple of centuries has never limped along for want of neural networks, or of AI for that matter.

We see conventional computers with conventional software, needing no media hype, doing a wonderful job handling the whole diversity of objectives, gradually taking on more and more of them with steadily improving performance.

Moreover, nothing around us suggests that this would ever stop or even lose momentum.

Similarly, there is no reason to believe that one of zillions of algorithms would turn into a fairy godmother taking us all to a Brave New World out of the blue, sixty years after its invention.

An operational quantum computer would certainly change the beat, or even the DJ, since the argument above applies specifically to algorithms for electronic computers. Though I don't see much promise of QCs and quantum computing arriving anytime soon (yet).

 

7. Summary

  1. Neural networks are a subtype of computer algorithms with constrained usability, and that usability is set to become even more limited.

  2. In the foreseeable future, the only justified use of neural algorithms remains human-related communication aids.

  3. The fundamental reasons why they cannot beget artificial intelligence or a technological singularity are as certain as twice two makes four.

  4. Investing in neural networks (apart from the entertainment business and human communication interfaces) means only wasting your money.

April 20, 2017 by John Galt