In this notebook, we'll go through some of the manifold learning and network graphing features of the SliceMatrix-IO python client. To begin, let's fire up a client in our California data center. Remember to substitute your api key in the code below.

Don't have a key yet? Get your api key here

In [1]:

```
from slicematrixIO import SliceMatrix
api_key = "insert your api key here"
sm = SliceMatrix(api_key, region = "us-west-1")
```

Manifold learning is based on a powerful yet simple assumption about the true nature of data.

> Data points live approximately on a manifold of much lower dimension than the input space.

Put simply: in any dataset of high enough dimension, the number of factors truly driving the data is far smaller than the number of input features. This occurs when high-dimensional data points are generated by a process that is difficult to understand directly from the input space, but which has only a few latent factors controlling the variation in the data.

Students of financial markets might be familiar with this concept under other names, such as Fama-French, Factor Analysis, and Principal Components Analysis, to name a few. In all three of these approaches, the practitioner views the chaos of the financial markets as a combination of a few factors. In the case of the first two models, a stock's individual returns might be modeled by just a few factors, such as market capitalization and earnings per share.

PCA, on the other hand, attempts to construct these factors from the data itself. It might come as a surprise to some that PCA is one of the simplest forms of manifold learning, but it has an important limitation: PCA assumes the data live on a *linear* manifold. While PCA is great for discovering linear relationships, real-world data is rarely so well behaved.
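To make the linear case concrete, here is a minimal sketch (using scikit-learn, not part of the SliceMatrix-IO client) of PCA recovering a low intrinsic dimension: ten observed features are driven by just two latent factors, and nearly all of the variance lands in the first two components.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# two latent factors drive ten observed features, plus a little noise
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(500, 10))

pca = PCA(n_components=10).fit(X)
print(pca.explained_variance_ratio_.round(3))
# almost all of the variance is captured by the first two components
```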

Luckily for us there are many more complex manifold learning algorithms to choose from. We'll explore one of the most powerful algorithms today: **the Isomap**.

This algorithm is good for discovering both linear and non-linear relationships within your datasets. Additionally, the Isomap is built on a powerful graph algorithm, so it is well suited to 1) classification, 2) regression, and 3) visualization of data. In particular, we'll use the Isomap to visualize the price structure within the Nasdaq 100 over the last year.
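As a standalone illustration of the same idea (again using scikit-learn rather than the SliceMatrix-IO client), here is an Isomap with the same settings we'll use below, K = 6 neighbors and D = 2 output dimensions, unrolling a classic non-linear 3D manifold into two dimensions:

```python
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap

# 500 points sampled from an S-shaped 2D surface embedded in 3D
X, color = make_s_curve(n_samples=500, random_state=0)

# K = 6 nearest neighbors, D = 2 output dimensions, mirroring the cells below
iso = Isomap(n_neighbors=6, n_components=2)
emb = iso.fit_transform(X)
print(emb.shape)  # (500, 2)
```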

In [2]:

```
%matplotlib inline
import pandas as pd
from pandas_datareader import data as web
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
```

In [3]:

```
data = pd.read_csv("https://s3.amazonaws.com/static.quandl.com/tickers/nasdaq100.csv")
```

In [4]:

```
start = dt.datetime(2016, 3, 7)
end = dt.datetime(2017, 3, 7)
volume = []
closes = []
good_tickers = []
for ticker in data['ticker'].values.tolist():
    print(ticker, end=" ")
    try:
        vdata = web.DataReader(ticker, 'yahoo', start, end)
        cdata = vdata[['Close']]
        closes.append(cdata)
        vdata = vdata[['Volume']]
        volume.append(vdata)
        good_tickers.append(ticker)
    except Exception:
        # skip tickers Yahoo can't return data for
        print("x", end=" ")
```

In [5]:

```
closes = pd.concat(closes, axis = 1)
closes.columns = good_tickers
```

Now we'll take log-differences of the price data:

In [6]:

```
diffs = np.log(closes).diff().dropna(axis = 0, how = "all").dropna(axis = 1, how = "any")
diffs.head()
```

Out[6]:
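As a quick sanity check on the transformation: log-differencing a price series is identical to taking the log of successive price ratios. A tiny example with hypothetical prices:

```python
import numpy as np
import pandas as pd

prices = pd.Series([100.0, 105.0, 102.9])  # hypothetical closes
log_rets = np.log(prices).diff().dropna()
# equivalent to np.log(prices / prices.shift(1)).dropna()
print(log_rets.round(4).tolist())
```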

Now let's create an Isomap model to visualize the herding structure of the market:

In [7]:

```
iso = sm.Isomap(dataset = diffs, D = 2, K = 6)
```

In [8]:

```
from slicematrixIO.notebook import GraphEngine
viz = GraphEngine(sm)
```

In [9]:

```
viz.init_style()
```

Out[9]:

In [10]:

```
viz.init_data()
```

Out[10]:

And then the fun part...

In [12]:

```
viz.drawNetworkGraph(iso, width = 1000, height = 500, min_node_size = 10, charge = -250, color_map = "Winter", color_axis = "pagerank", graph_layout = "force", label_color = "#fff", graph_style = "dark")
```

Out[12]:

In the graph above we used the *drawNetworkGraph* function to color the nodes according to the pagerank of each node. If this sounds familiar, it's because PageRank is the original algorithm Sergey Brin and Larry Page used to rank web pages, a derivative of which we use every time we type a query into our favorite search engine.

In much the same way as search engines rank web-pages by analyzing the hyperlink structure, we can assign importance to each stock symbol based on how it is connected to the rest of the graph. Just like with web-pages, social networks, and many other natural phenomena, the stock market is characterized by a graph structure where some symbols are what we might call **hyperconnectors** -- i.e. stocks that influence a large number of other symbols -- while others sit at the **periphery** of the graph and share few connections to their neighbors.
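To see how pagerank singles out hyperconnectors, here is a toy sketch using networkx. This is an illustration only: the tickers and edges are invented, and this is not necessarily how SliceMatrix-IO computes its ranks internally.

```python
import networkx as nx

# toy graph: one "hyperconnector" hub plus peripheral nodes
G = nx.Graph()
G.add_edges_from([
    ("SPY", "AAPL"), ("SPY", "MSFT"), ("SPY", "GOOG"),
    ("SPY", "AMZN"), ("AAPL", "MSFT"), ("GOOG", "XYZ"),
])

ranks = nx.pagerank(G)
# the hub collects the highest pagerank; peripheral nodes score lowest
print(sorted(ranks, key=ranks.get, reverse=True))
```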

You can access these factors directly using the *rankNodes* function on your Isomap objects, e.g.

In [13]:

```
iso.rankNodes("pagerank").plot()
```

Out[13]:

In [14]:

```
iso1 = sm.Isomap(dataset = diffs.iloc[0:(diffs.shape[0] // 2)], D = 2, K = 6)
iso2 = sm.Isomap(dataset = diffs.iloc[(diffs.shape[0] // 2):], D = 2, K = 6)
```

The *neighborhood* function reveals the assets most similar to a target symbol, in this case AAPL:

In [15]:

```
iso1.neighborhood("AAPL")
```

Out[15]:

In [16]:

```
iso2.neighborhood("AAPL")
```

Out[16]:

Note the changes in the composition of AAPL's neighborhood over time.
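For intuition about what a neighborhood query does, here is a rough sketch using scikit-learn's NearestNeighbors on synthetic returns. The tickers and factor loadings below are invented for illustration; SliceMatrix-IO's *neighborhood* operates on the fitted Isomap graph itself rather than raw distances.

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
market = rng.normal(size=250)  # one shared latent factor

# synthetic daily returns: beta * market factor + idiosyncratic noise
rets = pd.DataFrame({
    t: b * market + rng.normal(scale=0.5, size=250)
    for t, b in [("AAPL", 1.0), ("MSFT", 0.9), ("GOOG", 0.8),
                 ("XOM", 0.1), ("KO", 0.05)]
})

# each row of rets.T is one symbol's return history
nn = NearestNeighbors(n_neighbors=3).fit(rets.T.values)
_, idx = nn.kneighbors(rets.T.loc[["AAPL"]].values)

# idx[0][0] is AAPL itself; the rest are its closest symbols
print(rets.columns[idx[0][1:]].tolist())
```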

We can also visualize changes in the global structure of the graph over time, measuring the stability of the price correlation structure by examining how the connections within the graph evolve:

In [17]:

```
viz.drawNetworkGraph(iso1, height = 500, min_node_size = 10, charge = -250, color_map = "Heat", graph_style = "dark", label_color = "rgba(255, 255, 255, 0.8)")
```

Out[17]:

In [18]:

```
viz.drawNetworkGraph(iso2, height = 500, min_node_size = 10, charge = -250, color_map = "Heat", graph_style = "dark", label_color = "rgba(255, 255, 255, 0.8)")
```

Out[18]:

Don't have a SliceMatrix-IO api key yet? Get your api key here