
A Closer Look at "The Post-Quantum Mechanics of Conscious Artificial Intelligence"

As always, the AIDL admins routinely look at whether a certain post should stay on our forum. Our criteria rest on three pillars: relevance, non-commercial, and accuracy. (Q13 of the AIDL FAQ)

This time I look at "The Post-Quantum Mechanics of Conscious Artificial Intelligence", a video brought up by an AIDL member, who recommended we start from the 40-minute mark.
So I listened through the video as recommended.

Indeed, the post is certainly non-commercial. And yes, it mentions AGI and Roger Penrose, so it is relevant to AIDL. But is it accurate? I'm afraid my lack of a physics background trips me up here, so my judgment is "I cannot decide." Occasionally new science arrives in a form no one understands yet, and calling something inaccurate without knowing is not appropriate.

As a result, the post stays. But please keep reading.

That said, I don't mind giving a *strong* response to the video, for the following three reasons:

1, According to Wikipedia, most of Dr. Jack Sarfatti's theories and work are not *peer-reviewed*. He left academia in 1975. Most of his work is speculative, and much of it is self-published(!). There is no experimental proof of what he says. He was asked several times in the video to elaborate on his thoughts, and he just said "You will know that it's real." That is a sign that he doesn't really have evidence.

2, Then there is the idea of "Post-Quantum Mechanics". What is it? The information we can find is really scanty. I can only find one group which seems to be dedicated to such study, as in here. Since I can't quite decide whether that study is valid, I would again say "I can't judge." But I also couldn't find any other group which actively supports the theory, so maybe we should call it, at best, "an interesting hypothesis". Sarfatti also builds his argument on the existence of a "Post-Quantum Computer". What is that? Again, I cannot quite find the answer online.

Also, you should be aware that current quantum computers have limited capability. D-Wave's quantum computing is based on quantum annealing, and many dispute whether it is true quantum computing. In any case, neither "conventional" quantum computing nor quantum annealing has anything to do with a "Post-Quantum Computer". That, again, should make you very suspicious.

3a, Can all these interesting theories be the mechanism of the brain or AGI? In the video, Sarfatti mentioned the brain/AGI four times. He makes two points, and I will counter them in turn. The first is that if you believe Penrose's theory that neurons are related to quantum entanglement, then his own theory based on post-quantum mechanics would be huge. But if you listen to serious computational neuroscientists, they are very cautious about whether quantum theory is the basis of neuronal exchange of information. There is plenty of experimental evidence that neurons operate by electrical and chemical signals, but these operate at a much larger scale than quantum mechanics. Why Penrose would suggest otherwise has made many learned people scratch their heads.

3b, Then there is the part about Turing machines. Sarfatti believes that because a "post-quantum computer" is so powerful, it must be the mechanism used by the brain. What's wrong with such an argument? First, no one knows what a "post-quantum computer" is, as I just mentioned in point 2. But even if it were powerful, that doesn't mean the brain has to follow such a mechanism. The same can be said of our current quantum computing technologies.

Finally, Sarfatti himself believes that it is a "leap of faith" to believe that consciousness is a wave. I admire his passion for speculating about the world of science and human intelligence. Yet I have also learned, from reading Gardner's "Fads and Fallacies", that many pseudoscientists have charismatic personalities.

So Members, Caveat Emptor.

Arthur

List of Bitcoin/Blockchain Resources

As AIDL grew, once in a while people would talk about how blockchain could affect AI or deep learning.  Currently it is still a long shot, but blockchain by itself is a very interesting technology and it deserves our notice.

Here are some resources you may use to learn about blockchain.  Unlike with the "Top 5 List" for AIDL, I don't really understand the technology too well.  But unlike the "List of Neuroscience MOOCs", Greg Dubela did give me a lot of recommendations on what you should learn.  Thus this post is also used as a resource post in "Blockchain Nation".

Introductory Videos:

  • (2 minutes) This video explains the purpose of blockchain in 2 minutes, and the promise it makes.
  • This 6-part series from Dash School is a great introductory series on what blockchain is, how it is governed, and several fundamental concepts.  Greg highly recommends the series.

MOOC

Blockchain is still a new development, so it's hard to find a MOOC which teaches the whole thing in its entirety.  We found a couple of exceptions:

Books

Visualizing Blockchain

Different Cryptos: (under construction)

Learning blockchain these days usually means knowing the characteristics of different coins.  Here is a list of interesting ones.

  • Bitcoin
  • Litecoin
  • Ripple
  • Ethereum Classic
  • Ethereum
  • Dogecoin
  • Freicoin

As I said before, we are really no experts on the topic.  But as of 20170705, I am taking the Princeton class, and I find it quite promising; it gets into the details of how blockchain really works.

To be reviewed:

  • Someone also brought up the University of Nicosia's introductory MOOC on Bitcoin.  I haven't seen many reviews yet, so let's decide later.
  • Khan Academy: https://www.khanacademy.org/economics-finance-domain/core-finance/money-and-banking/bitcoin/v/bitcoin-what-is-it
  • Berkeley "Dive Deep into Ethereum" https://docs.google.com/document/d/1ejYCWkHQIRInXB4VifoHevom8CWVJ69zFVfP4J2fjSU/edit

Notes on fasttext

Some gist about fasttext:

  • Basically three functionalities: word vectors, text classification, and model compression.
  • Text classification performance is really comparable with other deep methods.  Another piece of web wisdom is here.
  • Running the tasks is trivial for proficient Unix users, so I don't want to repeat them here (a rough sketch follows this list).  The examples also run end-to-end and they are fast.
  • Unlike what I thought, though, fasttext's classifier isn't really a deep-learning model; but as I said, that's not the point.
  • Compression is known to be good enough that models can fit on embedded devices.
  • Users are also granted a patent license to use the source code freely.  Good stuff.
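
For the curious, here is a minimal sketch of what the three tasks look like from Python. This is only an illustration under assumptions: it uses the official fasttext Python bindings, and the file names train.txt and corpus.txt are made up.

    import fasttext

    # Text classification: train.txt is a hypothetical file with one example per
    # line, each prefixed by its label, e.g. "__label__positive great movie".
    clf = fasttext.train_supervised(input="train.txt", epoch=5, lr=0.5)
    print(clf.predict("the plot was thin but the acting was great"))

    # Word vectors: corpus.txt is a hypothetical plain-text corpus;
    # model can be "skipgram" or "cbow".
    vec = fasttext.train_unsupervised("corpus.txt", model="skipgram")
    print(vec.get_word_vector("learning")[:5])

    # Compression: quantize the classifier so the model fits on small devices.
    clf.quantize(input="train.txt", retrain=True)
    clf.save_model("model.ftz")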

Some other nice resources one can follow:

http://debajyotidatta.github.io/nlp/deep/learning/word-embeddings/2016/09/28/fast-text-and-skip-gram/

http://sebastianruder.com/word-embeddings-1/index.html#continuousbagofwordscbow

http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf

http://textminingonline.com/fasttext-for-fast-sentiment-analysis

 

Some Quick Impressions on the MIT DL4SDC Class by Lex Fridman

Many of you might know about the MIT DL4SDC (Deep Learning for Self-Driving Cars) class by Lex Fridman. Recently I listened through the 5 videos and decided to write a "quick impression" post. I usually write these "impression posts" when I have only gone through part of a class's material. So here you go:

* 6.S094, compared to Stanford's cs231n or cs224d, is more of a short class: it takes less than 6 hours to watch through all the materials.

* ~40-50% of the class was spent on basic material such as backprop and Q-learning. Mostly because the class is short, the treatment of these topics feels incomplete. E.g. you might want to listen to Silver's class to understand RL systematically and where Q-learning fits, and you might want to listen to Karpathy's cs231n lectures to learn the basics of backprop, then finish Hinton's or Socher's classes to completely grok it. But again, this is a short class; you really can't expect too much.

Actually, I like Fridman's stance on these standard algorithms: he asked the audience tough questions on whether the human brain ever behaves like backprop or RL.

* The rest of the class is mostly on SDC topics: planning with RL, steering with an all-in-one CNN. The gem (Lecture 5) is Fridman's own research on driver state. If you don't have too much time, I think that's the lecture you want to sit through.

* Now, my experience doesn't include the two very interesting homeworks, DeepTraffic and DeepTesla. I heard great stories about both from students. Unfortunately, I never tried to play with them.

That's what I have. Hope the review is useful for you. 🙂

My Social Network Policy

For years, my social networks haven't followed a single theme.  For the most part I have a variety of interests and didn't feel like pushing any news ....... until there was deep learning.  As Waikit and I started AIDL on FB, with a newsletter and a YouTube channel, I also started to see more traffic coming to thegrandjanitor. Of course, there are also more legit followers on Twitter.

In any case, here are a couple of ways you can find me:
Facebook: I am private on Facebook. I don't PM, but you can always find me at the AIDL group.
LinkedIn:  On the other hand, I am very public on LinkedIn. So feel free to contact me at https://www.linkedin.com/in/arthchan2003/ .
Twitter: I am quite public on Twitter https://twitter.com/arthchan2003.
Plus: Not too active, but yeah I am there https://plus.google.com/u/0/+ArthurChan.

Talk to you~
Arthur

 

Good Old AI vs DNN - A Question from an AIDL Member

Edited from this discussion at AIDL.

From Ardian Umam (shortened, rephrased):
"Now I'm taking AI course in my University using Peter Norvig and Stuart J. Russell textbook. In the same time, I'm learning DNN (Deep Neural Network) for visual recognition by watching Standford's Lecure on CNN (Convolutional Neural Networks) knowing how powerful a DNN to learn something from dataset. Whereas, on AI class, I'm learning about KB (Knowledge Base) including such as Logical Agent, First Order Logic that in short is kind of inferring "certain x" from KB, for example using "proportional resolution".

My question : "Is technique like what I learn in AI class I describe above good in solving real AI problem?" I'm still not get strong intuition about what I study in AI class in real AI problem."

Our exchange:

My answer: "We usually call "Is technique .... real AI problem?" GOAI (Good Old Artificial Intelligence). So your question is weather GOAI is still relevant.

Yes, it is. Let's take search as an example. More complicated systems usually have certain components based on search. E.g. many speech recognizers these days still use the Viterbi algorithm, which is large-scale search. NNMT-type techniques still require some kind of stack decoding. (Edit: I originally wrote beam search, but I am not quite sure.)

More importantly, you can see many things as a search. E.g. for optimization of a function, you can solve it with calculus, but in practice you actually use a search algorithm to find the best solution. Of course, in real life you rarely implement beam search for optimization. But the idea of search gives you a better feel for what many ML algorithms are like."
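(To make the "optimization as search" point concrete, here is a minimal sketch: a greedy local search, i.e. hill climbing, that finds the minimizer of a function without any calculus. The function f and the step size are made up purely for illustration.)

    # Minimal illustration of "optimization as search":
    # greedy local search (hill climbing) to minimize f(x) = (x - 3)^2.
    def f(x):
        return (x - 3.0) ** 2

    def hill_climb(x, step=0.1, iters=1000):
        for _ in range(iters):
            # Look at the current point and its two neighbors; move to the best one.
            x = min([x - step, x, x + step], key=f)
        return x

    print(hill_climb(0.0))  # converges near 3.0, the true minimizer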

AU: "Ah, I see. Thank you Arthur Chan for your reply. Yes, for search, it is. Many real problems now are still utilizing search approach to solve. As for "Knowledge, reasoning" (Chapter 3 in the Norvig book) for example using "proportional resolution" to do inference from KB (Knowledge Base), is it still relevant?"

My Answer: "I think the answer is it is and it is not. Here is a tl;dr answer:

It is not: many practical systems these days are probabilistic, which makes Part V of Norvig's book *feel* more relevant now. Most people in this forum are ML/DL fans. That's probably the first impression you should have these days.

But then, it is also relevant. In what sense? There are perhaps three reasons. The first is that it allows you to talk with people who learned A.I. in the last generation, because people in their 50s-60s (a.k.a. your boss) learned to solve AI problems with logic. So if you want to talk with them, learning logic/knowledge-type systems helps. Also, in AI no one knows which topic will revive. E.g. fractals are now among the least talked-about topics in our community, but you never know what will happen in the next 10-20 years. So keeping breadth is a good thing.

Then there is the part about how you think about search. In Norvig and Russell's book, the first few search problems are about solving logic problems and games, such as first-order logic and chess. While these are used in fewer systems, compared to search that involves probabilities they are much easier to understand. E.g. you may have heard of people in their teens writing their first chess engine, but I have heard of no one writing a (good) speech recognizer or machine translator before grad school.

The final reason is perhaps more theoretical: many DL/ML systems you use, yeah... they are powerful, but not all of them make decisions humans understand. So they are not *interpretable*. That's a big problem, and how to link these systems to GOAI-type work is still a research problem."
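
(As a footnote to the exchange above: for readers who have not seen "propositional resolution", here is a tiny sketch of a single resolution step. Clauses are represented as sets of literals, "~P" stands for the negation of "P", and the example clauses are made up for illustration.)

    # One step of propositional resolution: from (P or Q) and (~P or R),
    # resolving on the complementary pair P / ~P derives (Q or R).
    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        """Return every clause obtained by resolving clause c1 with clause c2."""
        resolvents = []
        for lit in c1:
            if negate(lit) in c2:
                resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
        return resolvents

    print(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"})))  # [frozenset({'Q', 'R'})]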