Thursday, November 2, 2017

DeepMind's AlphaGo Zero - An Astonishing Advance


This new development is exciting and scary at the same time. The progress of AI is moving at the speed of light. We are one step closer to General AI! The implications of this AI technology are profound!


Watch this video for a more detailed technical explanation.


Thursday, June 22, 2017

Machine Learning in Biology - Bioinformatics Opportunity

Hi AI Club People! How are you?

It's 5:00 pm, Thursday. The DOE semester grades window just closed, and I finally finished all of my grading just minutes ago! I know that some of you might be very disappointed, and some of you might still wonder whether the AI Club will continue. I am so sorry for "disappearing" from the club for so long. Since we started the F1 activity by accident, I had been dragged into a series of crisis modes. I finally feel a little bit of relief at this moment!

Yes, I am back and we will continue! A few days ago, I got an email from Dr. Weigang Qiu (a Bioinformatics professor at Hunter College), with whom I worked on a Bioinformatics STEM project last year. He has been very interested in applying Machine Learning to biological problems. Though we didn't make great progress on our side (the STEM course) in the past year, they seem to have made a lot of progress. This summer, they are going to look into Machine Learning more seriously. Here is the link he sent me. He is also looking for some motivated students to take on 1-2 summer projects they are starting. I will discuss with him further how we can get involved. In the meantime, if any AIers or friends of yours are interested in participating in Bioinformatics research, please respond to this post.

I feel that we should all meet together before summer (maybe next Tuesday or Wednesday?). What do you think? Please comment.




Sunday, June 4, 2017

2017 STEM Showcase

Hi AIers,

You are invited to attend the 2017 STEM Showcase, in which students will share their research and development results. It's a good opportunity to see some of the research projects being carried out in our school community.


Sunday, April 16, 2017

Talks from Prominent AI Researchers

Jürgen Schmidhuber is the main developer of the Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN). His witty talks on AI always attract a large audience. Here is his short talk on "When creative machines overtake man".



Andrew Ng is a computer scientist specializing in Machine Learning. He is the former chief scientist at Baidu Research (often called the Chinese counterpart of Google) in Silicon Valley, and an adjunct professor at Stanford University. Ng is also the co-founder and chairman of Coursera, an online education platform with more than 24 million registered users and more than 2,000 courses. Here is Andrew Ng's introduction to Deep Learning and its impact: "AI: The New Electricity".

 
Demis Hassabis founded DeepMind in September 2010. This small startup without any physical products was acquired by Google in 2014 for more than 500 million dollars. In March 2016, a program developed by DeepMind called AlphaGo beat Lee Sedol, a 9th dan Go player and one of the highest ranked players in the world, 4-1 in a five-game match. Go is an ancient game with more possible positions than the total number of atoms in the visible universe. Here is DeepMind CEO Demis Hassabis' talk on "Artificial Intelligence (AI) invents new knowledge and teaches human new theories".


You are encouraged to watch any video and post your notes. Enjoy the rest of your spring break!

Natural Wonders

Hello AIers,

Haven't met you all for quite a while. How has your spring break been so far? I just came back from a road trip covering over 1,740 miles and driving through 8 states. Here are some photos I'd like to share with you:

Luray Caverns, VA: A huge underground cave system

Various speleothems in the caverns
 
A mirrored pool called Dream Lake
Natural Bridge, VA: A 215-foot-high arch with a span of 90 feet
Thomas Jefferson purchased the land including the Natural Bridge from King George III of England in 1774.
Great Smoky Mountains National Park, TN
The observation tower at Clingmans Dome, TN (6,643 feet), the highest point of the Great Smoky Mountains
Vistas from the observation tower at Clingmans Dome


Friday, March 10, 2017

Machine Learning Seminar I (03/13/2017)


You are encouraged to invite friends to participate in the seminar.

Speakers:
    • Nearest Neighbor - Egzon Mujaj
    • Linear Regression - Emma Ruzicka
    • Decision Trees - Bailey McElligott
    • Support Vector Machines - Chin-Sung Lin
    • Neural Networks - Colleen McGuckin
    • Clustering - Alyssa Koo & Julia Zee



Sunday, March 5, 2017

Notes: A DARPA Perspective on Artificial Intelligence - Emma

In the video, John Launchbury presents the abilities, limits, and future of AI technology by comparing its different forms as they have advanced through the years. Nevertheless, it is important to note that all the waves, as he describes them, are still crucial in research, development, technology, and new inventions.

  1. First Wave (Handcrafted knowledge)
    1. Characterizing human knowledge into rules that could be implemented and interpreted by a computer
    2. Pros: Reasoning over narrowly defined domains (Reasoning)
    3. Cons: Poor handling of uncertainty, no learning capability, cannot interpret and predict the natural world/work with probability (Perceiving, Learning, Abstracting)
    4. Ex: game-playing programs, TurboTax, cybersecurity, etc.
  2. Second Wave (Statistical Learning)
    1. Creating and training statistical models (ex.: NEURAL NETWORKS: manifold hypothesis, "spreadsheets on steroids"), which learn through data inputs
    2. Pros: Perceiving the natural world and learning from it, learning and adapting based on both existing and new data, classifying & predicting (Perceiving, Learning)
    3. Cons: Cannot apply knowledge in multiple directions; not as reliable individually as they are statistically, so prone to mistakes because they work on probability (Abstracting, Reasoning)
    4. Ex: Face and Voice recognition, network flows
  3. Third Wave (Future, Contextual Adaptation)
    1. A responsive system to help understand decision-making trained through data inputs or examples
Key to Pros & Cons:
  • Perceiving: Observing and understanding the outside world
  • Learning: Constant gain and incorporation of knowledge
  • Abstracting: Using acquired knowledge at different levels of a problem
  • Reasoning: Working through facts with logic

A DARPA Perspective on AI


The Defense Advanced Research Projects Agency (DARPA) is one of the main driving forces of new technology in the US. It hosted the Grand Challenge in 2004 and 2005 to accelerate breakthroughs in driverless-car technology. Here is a concise and in-depth analysis of AI technology from DARPA. Using perceiving, learning, abstracting, and reasoning as measures, John Launchbury, the Director of DARPA's Information Innovation Office (I2O), attempts to demystify AI: what it can do, what it can't do, and where it is headed. Let's learn from his analysis of the "three waves of AI". You are encouraged to take notes and share them with us on the blog.


Saturday, March 4, 2017

Machine Learning Seminar I: Introduction to Machine Learning

It's great to see some of you posting your results of the Perceptron Challenge on our blog! It's an amazing achievement to pick up a new tech skill so fast! I encourage the rest of you to post your results if you haven't done so.

Before we dive into a very specific Machine Learning (ML) technique called the Multi-Layer Perceptron (MLP), it is reasonable to first broaden our general knowledge and perspective of Machine Learning. We are going to hold a Machine Learning Seminar for this purpose. Group members are welcome to choose specific topics in ML and present them in our seminar. You will go deeper into your chosen topic(s), and we are going to learn from each other. Please use the following link to sign up for the topic(s) you are going to present. You can pair up with another group member.

The details of the seminar are listed below:
  • Date: March 13, 2017 (Monday)
  • Time: 4:00 pm - 6:00 pm
  • Length: 10 minutes for each topic including Q&A
  • Content: Explaining a specific ML technique
  • Format: PowerPoint or Google Slides
  • Audience: AI Research Group + invited friends
You may use other online resources to help you prepare for the presentation. You may explain both the theoretical part (the algorithm) and the application part of that ML technique (run through an example). If you have any questions, please feel free to contact me.

Intuitive AI and Augmented Age

How can AI impact our lives? Listen to futurist Maurice Conti paint the future of AI, the Augmented Age, for us. Several innovative ideas are presented in this TED Talk: generative AI, intuitive AI, human-robot augmentation, product nervous systems, etc. Sit back and enjoy this inspiring intellectual journey!


Tuesday, February 28, 2017

Perceptron Challenge - Colleen McGuckin

After our first meeting, we learned the basics of perceptrons. This challenge had us modify the original code from our first meeting so that it had three features, could handle three-dimensional data, and could work with different numbers of training and testing data. This naturally led to our creating 3D graphs, as depicted above. This first challenge was very fun, and I guarantee you will have fun completing it.

Perceptron Code Challenge - Alyssa Koo

After learning the basics of perceptron concepts a few weeks ago, we put our skills to the test with a new set of data and were able to create a 3D graph that displayed our data along with the equation we got from utilizing the training data. The code was able to pick up the pattern and create an equation that was the best line of fit for the given data.



Perceptron Code Challenge - Emma Ruzicka

This was fun!!! We trained and tested a set of data with three different variables. Thus, naturally, plotting it on a 3D graph was the most efficient and visually understandable option. We plotted the testing data to see the patterns, and it seems to have worked pretty well! :)
The key was simply to adjust the code to handle three features: 3D Perceptron Code


Tuesday, February 14, 2017

Machine Learning Startups and Applications

Machine Learning is no longer a fancy sci-fi dream or an obscure research field in the dark corner of the ivory tower. It is currently a highly competitive research field for both academia and industry. Industry giants such as Google, Facebook, Uber and Baidu have acquired most of the dominant players in this space to improve their services. The most well-known case is that Google spent over $500 million for a little-known London startup called DeepMind in 2014! Many small startups are eagerly trying to find a niche in this new playing field. Since machine learning can have huge impacts on various fields in our society, we would like to explore its real-world applications while we pick up the technical know-how. I am attaching the links to two online articles on this topic. I am looking for some volunteers to summarize the information in these articles into spreadsheet format and share the links on our blog. These can be working documents that we keep adding new information to. This effort will help us proceed to the next stage of research. At the same time, you can start thinking about what types of real-world problems in your life could potentially be solved by machine learning techniques. You can share your thoughts with us on the blog.

Monday, February 13, 2017

AI Workshop I and Perceptron Challenge

Girls hanging out after the AI workshop I
WORKSHOP 
 
During AI Workshop I today, nine students, including most of our research group members, were trained to understand the perceptron from concepts to code-level implementation (in the C language)! Thanks to Elro seniors Qin Ying and Tiffany for presenting the key perceptron concepts to us! It is our first contact with one of the most important machine learning techniques, the Artificial Neural Network (ANN). The perceptron is an algorithm for supervised learning. It is a type of linear classifier which can classify linearly separable data. This basic training will form the foundation for the next-level neural network, the Multi-Layer Perceptron (MLP).

In case you missed the workshop, or you want to review the perceptron concepts in more detail, you can watch the following video tutorials:

CHALLENGE

As you finish the perceptron training, you have a new challenge to tackle: modifying the perceptron code to classify a new data set. The details of the data set are listed below:
  1. No.of classes: 2 (class 0 and 1)
  2. No. of training data: 2000
  3. No. of testing data: 400
  4. No. of features: 3 (feature vector is in 3-dimensional space)
  5. Files: training.txt, and testing.txt
Since the original perceptron code is only for 2-dimensional data, you need to identify and modify all the dimension-related code to make it work. After you finish the classification, visualize your training data, testing data, and the decision boundary using Grapher on a Mac laptop (see example). Our deadline for this challenge is March 1, 2017 (Wednesday). You can post your results onto the blog once you finish the project. Please feel free to drop by my room (501) with any questions. If you need computer resources to work on it during the break, please contact me in advance.

Sunday, February 12, 2017

Workshop Material: Perceptron Training and Classifying Code

Please click the perceptron code link to download the code that includes training and classification.
There are two project folders in the zip file:
  1. PerceptronTrainer: Performs training based on the training data in training.txt and saves the trained weights to weights.txt.
  2. PerceptronClassifier: Performs classification based on weights.txt and outputs the result to classification.txt.
You have to set the schemes (working directories) of both projects to the PerceptronTrainer folder where weights.txt is located, so that they can share the file at the same location.

Wednesday, February 8, 2017

AI Workshop 1: Artificial Neural Network - Perceptrons

Hi All,

We are planning to have our first AI workshop some time next week. The goal is to learn the basics of Neural Networks. The topics include:
  • Neurons & Neural Network
  • Artificial Neurons
  • Basic operations of Perceptron
  • Decision Boundary
  • Learning Rules of Perceptron
  • Limitations of Perceptron
During the workshop, we are going to implement a Single-Layer Perceptron (SLP) in the C language and see how it can help us solve real-world problems. The workshop will be ~2 hours long. You are welcome to invite your friends.

To facilitate the meeting, please use the survey form to indicate your availability. I will let you know the date as soon as the group fills it out. We will make the final decision based on the survey results.

Introduction to Machine Learning Course

I just found this Machine Learning (ML) course, Introduction to Machine Learning, on YouTube! It is taught by Professor Alex Ihler at the University of California, Irvine. I watched a few tutorials, and they seem not too hard for high school students to understand. There are 35 short video tutorials (most of them < 20 minutes) covering a wide range of topics in ML. I suggest watching 2-3 tutorials every week to gradually build up your foundation in ML. We are going to zoom into a very specific field of ML called "Neural Networks" in the next few weeks. Following this course can give you a broader foundation and perspective on ML. You are encouraged to post your notes and questions as you watch these videos. Enjoy!

Saturday, February 4, 2017

Notes: Machine Learning

Thirty minutes of continuous information is hard to absorb... So, to summarize "A Friendly Introduction to Machine Learning", here are the definitions:
Feel free to read these as you watch the video; they go in the order in which they are explained and may help define what is going on as it is being visually represented.

  • Machine Learning: The ability of a computer to learn, like a human, from experience. For computers, however, this experience is substituted by data that the machine can analyze, store, and compare. 
    • Linear Regression: Comparison of data on a coordinate plane; Data points are marked on a graph and a line that best fits the data is derived, so that it has the least amount of error. The error is measured by the distance of all the points from the line: The greater the sum of the distances, the greater the error; This method also works with polynomials, parabolas, etc. 
      • Gradient Descent: an algorithm used to minimize functions; A continuous procedure that adjusts a function, in order to provide the most optimal result or decrease the error to find its minimum value; Uses calculus!
    • Naive Bayes: Providing solutions based on probability; Probable characteristics for a desired output are evaluated, and the inputs with the most of these characteristics are the first to be considered.
    • Decision Trees: Comparison of data based on a table; Using multiple features to split the data continuously to narrow it down to individual users/outputs. (You can think of it as a series of if statements that keep splitting the [remaining] data into two categories for x features or characteristics)
    • Logistic Regression: Comparison of data on a coordinate grid; Looking for previous trends to divide data. Like Linear Regression, a line is used to divide the points. Except, the error is measured by the number of points that are wrongly classified by the division of the graph with the line, based on a given condition (ex.: pass or fail).
      • Gradient Descent is, again, used to minimize the error.
    • Neural Networks: Comparison of data on a coordinate grid; Using multiple lines to split the data based on multiple conditions. Like an and-statement, the data that fits all conditions is our output; A combination of multiple logistic regression graphs.
    • Support Vector Machines: Finding a line that maximizes the distance from boundary points through linear optimization; Separating the data into two sections and splitting it evenly between them. (Think of this kind of like finding the middle of a middle.)
      • Kernel Trick: Helps Support Vector Machines; Finding a function that splits the data based on certain mathematical properties and similarity between the data points; As a mapping function, it can often be pictured as working in a 3D space, where the additional dimension serves as a way to split the data accordingly. 
    • Clustering: Grouping data points based on proximity until a distance limit is met; Essentially, this means grouping data points with similar values together to classify the data. By determining the limit between the differences of the values, one can manipulate how many or how large the groups are. (hierarchical or K-means)

Thursday, February 2, 2017

Tutorial: A Friendly Introduction to Machine Learning

I just found this nice introductory Machine Learning video. It explains and illustrates a few of the most popular machine learning techniques in an extremely friendly way! The topics include Linear and Polynomial Regression, Gradient Descent, Naive Bayes, Decision Trees, Logistic Regression, Neural Networks, Support Vector Machines and the Kernel Trick. This video tutorial can help you quickly grasp the basic concepts and possible applications of machine learning. You are encouraged to go through it a few times, take notes as you watch, and find further resources online to go deeper into any topics that interest you. You can share your notes and findings with us by posting on this blog. Please try to finish this tutorial by February 5 (Sunday).

World's Youngest IBM Watson Programmer

It is fun and informative to watch this young gentleman explaining his Natural Language Q&A (NLQA) system. Enjoy it!

Thursday, January 19, 2017

Home-Made Self-Driving Car

Young hacker George Hotz (alias geohot, known for unlocking the iPhone) built a self-driving car in just a couple of months using machine learning technology in his own garage. He formed a company called Comma.ai to create a cheap ($999) driver-assistance product. Even though Hotz canceled the product after the National Highway Traffic Safety Administration (NHTSA) raised concerns, it is still a very interesting and fascinating technical achievement!


Second Industrial Revolution

The founding executive editor of Wired magazine, Kevin Kelly, shared his observations and vision about AI in this inspiring TED Talk, "How AI can bring on a second Industrial Revolution". He analyzed the features of AI development and predicted the inevitable trends of AI technology. His talk identifies the upcoming challenges and opportunities we are going to encounter in the next 20 years!

Wednesday, January 11, 2017

Welcome to the Artificial Intelligence Research Group!

Welcome to the Artificial Intelligence Research Group! The goals of the group are to tap into the cutting-edge academic & commercial resources of Artificial Intelligence (AI), follow the development & trends of new technologies, acquire the theoretical background and practical skills of machine learning, and apply machine learning algorithms to solve real-world problems. We are also looking forward to participating in some external STEM competitions and activities.

Most of the early-stage research activities will focus on climbing the learning curve of cutting-edge AI technology. Hands-on mini-projects will be used to help you move along the learning curve. They will include 
  • basic concepts of AI (definition, background, scope, structure, methods & applications)
  • basic neural networks and learning algorithms,
  • programming languages & machine learning libraries, 
  • deep learning algorithms, and
  • brainstorming the potential real-world applications.
The second-stage research activities will focus on a few challenging research projects. We will divide our group into a few teams (depending on the size of the group) to tackle each challenge. These projects may lead to competitions (such as the Google Science Fair), poster presentations at professional conferences (such as IEEE ISEC), research papers published in journals (such as NHSJS), class projects in Advanced STEM Research (see the previous STEM course blog), patent filings, or commercial applications. The sky will be the only limit!

Most of our "meetings" will be online through the blog or other electronic media, so you will be invited to be one of the authors of the Artificial Intelligence Research Group blog (http://stem-ai.blogspot.com/). You are encouraged to contribute to the blog frequently by posting and commenting on new developments, video links, programming tutorials, study notes, and research results. You should also visit the blog frequently for information, activities, and assignments. The blog will be our main playground for collaboration. If you haven't received the invitation, please let me know as soon as possible. We will also arrange some "meet-up times" and "workshops" at school at everyone's convenience. Our first "kick-off meeting" will be at 3:00 pm next Wednesday (01/18/2017). Feel free to invite your friends to explore and join the group. Bring with you your dreams, ideas, and questions!

In the meantime, please enjoy the videos I have posted below, and share your thoughts or post your notes. These videos will provide you with the basic concepts of artificial intelligence (or machine learning).
 How to make a neural network in your bedroom

 How Does Deep Learning Work?

If you come across any good video, please share with us. I hope to meet you all in our kick-off meeting! 

Happy Researching!